• Title/Summary/Keyword: Contrast noise rate


Performance of Run-length Limited Coded Parity of Soft LDPC Code for Perpendicular Magnetic Recording Channel (런-길이 제한 부호를 패리티로 사용한 연판정 LDPC 부호의 수직자기기록 채널 성능)

  • Kim, Jinyoung;Lee, Jaejin
    • The Journal of Korean Institute of Communications and Information Sciences / v.38A no.9 / pp.744-749 / 2013
  • We propose an LDPC coding scheme for the perpendicular magnetic recording channel in which the LDPC parity is encoded by the (1, 7) run-length limited (RLL) code and the user data are supplied to the decoder as soft input. The user data themselves are encoded by a maximum transition run (MTR) (3;11) code; to minimize the loss of code rate, the (1, 7) RLL code encodes only the LDPC parity. To further improve performance, we apply the soft-output Viterbi algorithm (SOVA) to the user-data part only. The scheme with SOVA performs well below 26 dB but degrades above 26 dB, because high jitter noise corrupts the soft information and the LDPC decoder must combine two different input types.
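
As a minimal illustration of the (d, k) = (1, 7) constraint that the RLL parity code enforces (the paper's actual encoder is not specified in the abstract), a Python checker for the run-length condition might look like this:

```python
def satisfies_dk(bits, d=1, k=7):
    """Check a (d, k) run-length constraint: consecutive 1s must be
    separated by at least d and at most k zeros. Leading zeros before
    the first 1 are left unchecked for simplicity."""
    zeros = None                     # zeros since the last 1; None before it
    for b in bits:
        if b == 1:
            if zeros is not None and not (d <= zeros <= k):
                return False
            zeros = 0
        elif zeros is not None:
            zeros += 1
            if zeros > k:            # run of zeros too long
                return False
    return True

print(satisfies_dk([1, 0, 0, 1, 0, 1]))  # True: gaps of 2 and 1 zeros
print(satisfies_dk([1, 1, 0, 1]))        # False: adjacent 1s violate d=1
```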

CA Joint Resource Allocation Algorithm Based on QoE Weight

  • LIU, Jun-Xia;JIA, Zhen-Hong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2233-2252 / 2018
  • For cross-layer joint resource allocation (JRA) in the Long-Term Evolution (LTE)-Advanced standard with carrier aggregation (CA), obtaining the optimal allocation scheme is difficult. This paper proposes a joint resource allocation algorithm weighted by the users' average quality of experience (JRA-WQOE). In contrast to prevalent algorithms, the proposed method accommodates the different carrier aggregation capabilities of users and accounts for user fairness. An optimization model is established around user quality of experience (QoE), with the aim of maximizing the total user rate. In this model, QoE is quantified by the mean opinion score (MOS) model, and the average MOS value of users is defined as the weight factor of the optimization model. The JRA-WQOE algorithm iterates between two algorithms: DABC-CCRBA, which dynamically allocates component carriers (CCs) and resource blocks (RBs) to users with different carrier aggregation capacities, and SPA, a subgradient power allocation algorithm based on the Lagrangian dual method that optimizes the power allocation. Simulation results showed that the proposed JRA-WQOE algorithm has low computational complexity and fast convergence. Compared with existing algorithms, it offers clear advantages in average throughput and user fairness, and it achieved higher average QoE values than prevalent algorithms across varying numbers of users and signal-to-noise ratios (SNRs).
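
The SPA step rests on the standard Lagrangian-dual pattern: for a fixed multiplier, the per-channel power has a closed water-filling form, and the multiplier is then moved along the power-budget violation. The sketch below shows only that generic pattern; the paper's exact JRA-WQOE/SPA updates, rate model, and weights are not given in the abstract, so all names and numbers here are assumptions.

```python
import math

def dual_subgradient_power(gains, weights, p_total, n0=1.0,
                           step=0.05, iters=500):
    """Weighted sum-rate power allocation by a Lagrangian-dual
    subgradient: closed-form water-filling for a fixed multiplier,
    then a multiplier update along the power-budget violation."""
    lam = 1.0
    p = [0.0] * len(gains)
    for _ in range(iters):
        # Primal step: p_i = max(0, w_i / (lam ln 2) - n0 / g_i).
        p = [max(0.0, w / (lam * math.log(2)) - n0 / g)
             for g, w in zip(gains, weights)]
        # Dual step: raise lam if the budget is exceeded, lower otherwise.
        lam = max(1e-9, lam + step * (sum(p) - p_total))
    return p

gains = [2.0, 0.8, 0.3]      # hypothetical channel gains per user/RB
weights = [1.0, 1.5, 0.7]    # hypothetical MOS-derived user weights
print(dual_subgradient_power(gains, weights, p_total=10.0))
```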

A Hippocampus Segmentation in Brain MR Images using Level-Set Method (레벨 셋 방법을 이용한 뇌 MR 영상에서 해마영역 분할)

  • Lee, Young-Seung;Choi, Heung-Kook
    • Journal of Korea Multimedia Society / v.15 no.9 / pp.1075-1085 / 2012
  • In clinical research using medical images, image segmentation is one of the most important processes. In particular, hippocampal atrophy is a specific marker of the progression of Alzheimer's disease and is therefore helpful for clinical diagnosis. To measure hippocampal volume exactly, segmentation of the hippocampus is essential. However, in MR images the hippocampus has relatively low contrast, a low signal-to-noise ratio, and discontinuous boundaries, which make it difficult to segment. To address this, we first selected a region of interest from the experimental image, subtracted the negative of the original image from the original image, enhanced the contrast, and applied anisotropic diffusion filtering and Gaussian filtering as preprocessing. Finally, we performed segmentation using two level-set methods. Through a variety of validation approaches, we confirmed that the proposed method improved both the speed and the accuracy of segmentation. Consequently, the proposed method is suitable for segmenting regions with features similar to those of the hippocampus, and we believe it has great potential if successfully combined with other research findings.
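
One of the preprocessing steps named above, anisotropic diffusion filtering, is commonly implemented as Perona-Malik diffusion. A minimal NumPy sketch of that generic filter follows (parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=20.0, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooths noise while
    preserving edges, a common MR preprocessing step."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance (exponential variant).
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

noisy = np.random.rand(64, 64) * 255   # stand-in for an MR slice
smoothed = perona_malik(noisy)
```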

Comparison of Collimator Choice on Image Quality of I-131 in SPECT/CT (I-131 SPECT/CT 검사에서 조준기 종류에 따른 영상 비교 평가)

  • Kim, Jung Yul;Kim, Joo Yeon;Nam-Koong, Hyuk;Kang, Chun Goo;Kim, Jae Sam
    • The Korean Journal of Nuclear Medicine Technology / v.18 no.1 / pp.33-42 / 2014
  • Purpose: I-131 scans are generally performed with a high-energy (HE) collimator. The medium-energy (ME) collimator is usually not recommended because of excessive septal penetration, but it can improve count-rate sensitivity at low I-131 doses. This study evaluates I-131 SPECT/CT image quality with the HE and ME collimators and examines whether the ME collimator is clinically applicable. Materials and Methods: The ME and HE collimators were mounted in turn on a Siemens Symbia T16 SPECT/CT system, using an I-131 point source and a NEMA NU-2 IQ phantom. Images were acquired with a single energy window (SEW) and with triple energy windows (TEW), with and without CT-based attenuation correction (CTAC) and scatter correction, and were reconstructed with the iterative method Flash 3D using different numbers of iterations and subsets. From the acquired images, the sensitivity, contrast, noise, and aspect ratio of the two collimators were compared. Results: The ME collimator was far more sensitive than the HE collimator (188.18 vs. 46.31 cps/MBq). For contrast, the HE-collimator image reconstructed with TEW, 16 subsets, and 8 iterations with CTAC showed the highest contrast (TCQI = 190.64); under the same conditions, the ME collimator showed lower contrast (TCQI = 66.05). The lowest aspect ratios were 1.065 for the ME collimator (SEW, CTAC on) and 1.024 for the HE collimator (TEW, CTAC on). Conclusion: Selecting a proper collimator is an important factor for image quality. The findings indicate that the HE collimator remains the most suitable choice for imaging the high-energy γ-rays emitted by I-131. However, the ME collimator is also applicable under low-dose, low-sensitivity conditions if the energy window, matrix size, iterative-reconstruction parameters, CTAC, and scatter correction are chosen appropriately.
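
The sensitivity figure is straightforward to reproduce from raw counts. A small sketch with hypothetical numbers follows; since the abstract does not define the TCQI contrast metric, a generic ROI contrast stands in for it:

```python
def sensitivity_cps_per_mbq(total_counts, acq_time_s, activity_mbq):
    """System sensitivity: counted events per second and per MBq."""
    return total_counts / acq_time_s / activity_mbq

def roi_contrast(hot_mean, bkg_mean):
    """Generic hot-to-background ROI contrast (a stand-in for TCQI)."""
    return hot_mean / bkg_mean - 1.0

# Hypothetical counts chosen only to land near the ME figure quoted above.
print(sensitivity_cps_per_mbq(9.4e5, 100.0, 50.0))    # 188.0 cps/MBq
print(roi_contrast(hot_mean=4200.0, bkg_mean=950.0))  # ≈ 3.42
```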

Feasibility Study of Different Biochars as Adsorbent for Cadmium and Lead

  • Kim, In Ja;Kim, Rog-Young;Kim, Ji In;Kim, Hyoung Seop;Noh, Hoe-Jung;Kim, Tae Seung;Yoon, Jeong-Ki;Park, Gyoung-Hun;Ok, Yong Sik;Jung, Hyun-Sung
    • Korean Journal of Soil Science and Fertilizer / v.48 no.5 / pp.332-339 / 2015
  • The objective of this study was to evaluate the effectiveness of different biochars in removing heavy metals from aqueous media. The experiment was carried out in aqueous solutions containing 200 mg Cd L⁻¹ or 200 mg Pb L⁻¹, using two biochars derived from soybean stover and orange peel (20 mg Cd or Pb g⁻¹ biochar). After shaking for 24 hours, the biochars were filtered out, and Cd and Pb in the filtrate were analyzed by flame atomic absorption spectrophotometry (FAAS). To characterize the metal-binding strength on the biochars, sequential extraction was performed following the modified SM&T (formerly BCR) procedure. The results showed that 70~100% of the initially added Cd and Pb was adsorbed on the biochars and removed from solution. The removal rate of Pb (95%, 100%) was higher than that of Cd (70%, 91%). For Cd, the orange-peel-derived biochar (91%) showed a higher adsorption rate than the soybean-stover-derived biochar (70%). Cd was adsorbed mainly in the exchangeable and carbonate fraction (1st phase), whereas Pb was adsorbed mainly in the Fe-Mn oxide and residual fractions (2nd and 4th phases). Surface-precipitated Cd and Pb complexes were also observed on the biochar surfaces by field-emission scanning electron microscopy (FESEM) and energy-dispersive X-ray spectrometry (EDAX).
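
The removal rates above follow from simple mass-balance arithmetic. A brief sketch; the equilibrium concentration, solution volume, and biochar mass are hypothetical values chosen to land near the reported orange-peel result:

```python
def removal_percent(c0_mg_l, ce_mg_l):
    """Percent of metal removed from solution: (C0 - Ce) / C0 * 100."""
    return (c0_mg_l - ce_mg_l) / c0_mg_l * 100.0

def adsorption_capacity(c0_mg_l, ce_mg_l, volume_l, mass_g):
    """Amount adsorbed per gram of biochar: q = (C0 - Ce) * V / m (mg/g)."""
    return (c0_mg_l - ce_mg_l) * volume_l / mass_g

# 200 mg/L initial Cd; the 18 mg/L equilibrium value is hypothetical.
print(removal_percent(200.0, 18.0))                # 91.0 %
print(adsorption_capacity(200.0, 18.0, 0.1, 1.0))  # 18.2 mg/g
```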

Shear-wave elasticity imaging with axial sub-Nyquist sampling (축방향 서브 나이퀴스트 샘플링 기반의 횡탄성 영상 기법)

  • Woojin Oh;Heechul Yoon
    • The Journal of the Acoustical Society of Korea / v.42 no.5 / pp.403-411 / 2023
  • Functional ultrasound imaging, such as elasticity imaging and micro-blood-flow Doppler imaging, enhances diagnostic capability by providing useful mechanical and functional information about tissue. However, implementing functional ultrasound imaging is limited by the acquisition and processing of vast amounts of radio-frequency (RF) data. In this paper, we propose a sub-Nyquist approach that reduces the number of acquired axial samples for efficient shear-wave elasticity imaging. The proposed method acquires data at one-third of the conventional Nyquist sampling rate and tracks shear-wave signals through RF signals reconstructed by band-pass-filtering-based interpolation, assuming an RF fractional bandwidth of 67%. To validate the approach, we reconstruct shear-wave velocity images from shear-wave tracking data obtained by the conventional and proposed approaches, and compare the group velocity, contrast-to-noise ratio, and structural similarity index measure. We demonstrate, qualitatively and quantitatively, the potential of sub-Nyquist shear-wave elasticity imaging, indicating that the approach could be practically useful for three-dimensional shear-wave elasticity imaging, where a massive amount of ultrasound data is required.
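
The reconstruction idea, zero-insertion upsampling followed by band-pass filtering, can be shown on a toy band-pass signal. The sketch below is not the paper's beamformed-RF pipeline: the sampling rate, tone frequencies, and brick-wall FFT filter are all assumptions, chosen so the replicas created by 3x sub-sampling do not overlap the signal band.

```python
import numpy as np

fs, n = 12.0, 1200                      # "full-rate" sampling and length
t = np.arange(n) / fs
# Toy band-pass signal confined to 4-6 (the third Nyquist zone of fs/3):
x = np.sin(2 * np.pi * 4.5 * t) + 0.5 * np.sin(2 * np.pi * 5.5 * t)

x_sub = x[::3]                          # sub-Nyquist acquisition at fs/3

# Reconstruct: insert zeros back to the full rate, then band-pass 4-6.
up = np.zeros(n)
up[::3] = x_sub * 3                     # factor 3 restores the amplitude
X = np.fft.rfft(up)
f = np.fft.rfftfreq(n, d=1 / fs)
X[(f < 4.0) | (f > 6.0)] = 0.0          # brick-wall band-pass in the FFT
x_rec = np.fft.irfft(X, n)

print(f"max reconstruction error: {np.max(np.abs(x_rec - x)):.2e}")
```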

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.96-110 / 2017
  • Most vehicle detection studies use a conventional or wide-angle lens, which leaves a blind spot when detecting vehicles to the rear, and the image is vulnerable to noise and diverse external environments. In this paper, we propose a method that detects rear vehicles in harsh environments with noise, blind spots, and similar conditions. First, a fish-eye lens is used to minimize the blind spot relative to a wide-angle lens. Because nonlinear radial distortion grows with the lens angle, calibration was performed after initializing and optimizing the distortion constant to ensure accuracy. In addition, the calibrated image was further processed to remove fog and to correct brightness, enabling detection even when visibility is obstructed by light and dark adaptation in foggy conditions or by sudden changes in illumination. Because fog removal is generally computationally expensive, the well-known Dark Channel Prior algorithm was used. Brightness was corrected with gamma correction; to determine the required gamma value, a brightness and contrast evaluation was conducted on only a part of the image rather than the entire frame, again to reduce computation time. Once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. Brightness correction and fog removal were processed in parallel, and the results were registered into a single image to minimize the total processing time. Finally, the HOG feature-extraction method was used to detect vehicles in the corrected image. As a result, detection with the proposed image correction took 0.064 seconds per frame and improved the detection rate by 7.5% compared with the existing vehicle detection method.
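
The fog-removal step builds on the dark channel, the per-pixel minimum over the color channels and a local patch. A compact, unoptimized sketch of that statistic and of the gamma-correction step (patch size and gamma value are illustrative, not the paper's):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image on [0, 1]: the minimum over the
    color channels and a local patch (the core Dark Channel Prior
    statistic used to estimate haze)."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):          # naive O(h*w*patch^2) loop
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def gamma_correct(img, gamma):
    """Brightness correction: out = in ** (1 / gamma) on [0, 1] images."""
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

frame = np.random.rand(60, 80, 3)           # stand-in for a camera frame
haze_map = dark_channel(frame)              # bright values suggest haze
brightened = gamma_correct(frame, gamma=1.8)
```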

Variation on Estimated Values of Radioactivity Concentration According to the Change of the Acquisition Time of SPECT/CT (SPECT/CT의 획득시간 증감에 따른 방사능농도 추정치의 변화)

  • Kim, Ji-Hyeon;Lee, Jooyoung;Son, Hyeon-Soo;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology / v.25 no.2 / pp.15-24 / 2021
  • Purpose: In the early stages of its dissemination, SPECT/CT drew attention for its correction methods and the qualitative value of its fusion images; with the recent introduction of companion diagnostics and therapy (theranostics), interest in its quantitative capability has been growing. Unlike PET/CT, absolute quantification in SPECT/CT must contend with varied acquisition and reconstruction conditions, such as the collimator type and detector rotation. In this study, we therefore examined how the radioactivity concentration estimate is affected when the total acquisition time is increased or decreased through either the number of projections or the acquisition time per projection. Materials and Methods: A 9,293 ml cylindrical phantom was filled with sterile water and 91.76 MBq of 99mTc. A standard image was acquired with a total acquisition time of 600 sec (10 sec/frame × 120 frames, matrix size 128 × 128), and the volume sensitivity and calibration factor were verified. Relative to the standard image, comparative images were obtained with total acquisition times of 60 (-90%), 150 (-75%), 300 (-50%), 450 (-25%), 900 (+50%), and 1,200 (+100%) sec. These were realized in two ways: by setting the acquisition time per projection to 1.0, 2.5, 5.0, 7.5, 15.0, and 20.0 sec (number of projections fixed at 120 frames), or by setting the number of projections to 12, 30, 60, 90, 180, and 240 frames (time per projection fixed at 10 sec). From counts measured in a volume of interest in each image, the percentage variation of the contrast-to-noise ratio (CNR) served as the qualitative assessment, and the percentage variation of the radioactivity concentration estimate served as the quantitative assessment. The relationship between the estimated radioactivity concentration (cps/ml) and the actual radioactivity concentration (Bq/ml) was analyzed using the recovery coefficient (RC) as an indicator. Results: The results [CNR, radioactivity concentration, RC] when the number of projections was varied were, for total-acquisition-time changes of -90%, -75%, -50%, -25%, +50%, and +100%: [-89.5%, +3.90%, 1.04], [-77.9%, +2.71%, 1.03], [-55.6%, +1.85%, 1.02], [-33.6%, +1.37%, 1.01], [-33.7%, +0.71%, 1.01], and [+93.2%, +0.32%, 1.00]. When the acquisition time per projection was varied, the corresponding results were [-89.3%, -3.55%, 0.96], [-73.4%, -0.17%, 1.00], [-49.6%, -0.34%, 1.00], [-24.9%, +0.03%, 1.00], [+49.3%, -0.04%, 1.00], and [+99.0%, +0.11%, 1.00]. Conclusion: In SPECT/CT, the total counts and the resulting image quality (CNR) changed roughly in proportion to the total acquisition time. In contrast, the quantitative estimates from absolute quantification changed by less than 5% (-3.55 to +3.90%) under all experimental conditions, maintaining quantitative accuracy (RC 0.96 to 1.04). Reducing the total acquisition time is therefore applicable to quantitative analysis without significant loss and is judged to be clinically effective. The study also shows that, for the same change in total acquisition time, varying the acquisition time per projection produces smaller qualitative and quantitative fluctuations than varying the number of projections.
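
The two evaluation quantities, the CNR variation and the recovery coefficient, are simple ratios. A short sketch using the phantom fill quoted above; the measured concentration in the example is hypothetical:

```python
def cnr(mean_roi, mean_bkg, sd_bkg):
    """Contrast-to-noise ratio of a volume of interest vs. background."""
    return (mean_roi - mean_bkg) / sd_bkg

def percent_variation(value, reference):
    """Percentage change relative to the 600 sec standard acquisition."""
    return (value - reference) / reference * 100.0

def recovery_coefficient(measured_bq_ml, true_bq_ml):
    """RC: estimated radioactivity concentration over the known truth;
    1.0 means perfect quantitative recovery."""
    return measured_bq_ml / true_bq_ml

# True concentration from the phantom fill above: 91.76 MBq in 9,293 ml.
true_conc = 91.76e6 / 9293                       # ≈ 9874 Bq/ml
print(recovery_coefficient(10150.0, true_conc))  # hypothetical ≈ 1.03
```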

Fully Automatic Coronary Calcium Score Software Empowered by Artificial Intelligence Technology: Validation Study Using Three CT Cohorts

  • June-Goo Lee;HeeSoo Kim;Heejun Kang;Hyun Jung Koo;Joon-Won Kang;Young-Hak Kim;Dong Hyun Yang
    • Korean Journal of Radiology / v.22 no.11 / pp.1764-1776 / 2021
  • Objective: This study aimed to validate a deep learning-based fully automatic coronary artery calcium scoring (CAC_auto) system against previously published cardiac computed tomography (CT) cohort data, with the manually segmented calcium scoring (CAC_hand) system as the reference standard. Materials and Methods: We developed the CAC_auto system using 100 co-registered, non-enhanced and contrast-enhanced CT scans. For validation, three previously published CT cohorts (n = 2985) were chosen to represent different clinical scenarios (2647 asymptomatic, 220 symptomatic, 118 valve disease) and four CT models. The performance of the CAC_auto system in detecting coronary calcium was determined. The reliability of the system in measuring the Agatston score relative to CAC_hand was evaluated per vessel and per patient using intraclass correlation coefficients (ICCs) and Bland-Altman analysis. The agreement between CAC_auto and CAC_hand across the cardiovascular risk stratification categories (Agatston score: 0, 1-10, 11-100, 101-400, > 400) was also evaluated. Results: In the 2985 patients, 6218 coronary calcium lesions were identified with CAC_hand. The per-lesion sensitivity and false-positive rate of the CAC_auto system were 93.3% (5800 of 6218) and 0.11 false-positive lesions per patient, respectively. In measuring the Agatston score, the CAC_auto system yielded ICCs of 0.99 for all vessels (left main 0.91, left anterior descending 0.99, left circumflex 0.96, right coronary 0.99). The limits of agreement between CAC_auto and CAC_hand were 1.6 ± 52.2. The linearly weighted kappa value for Agatston score categorization was 0.94. The main causes of false positives were image noise (29.1%, 97/333 lesions), aortic wall calcification (25.5%, 85/333), and pericardial calcification (24.3%, 81/333). Conclusion: The atlas-based, deep learning-empowered CAC_auto system provided calcium scores and risk-category classifications as accurate as the manual method, and could potentially streamline CAC imaging workflows.
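
The risk strata used for the kappa analysis follow the standard Agatston scheme, in which each lesion contributes its area times a density weight derived from its peak attenuation. A sketch of that standard definition with hypothetical lesions:

```python
def agatston_weight(peak_hu):
    """Standard Agatston density weight from a lesion's peak HU."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def risk_category(score):
    """The five strata used in the paper's agreement analysis."""
    for upper, label in [(0, "0"), (10, "1-10"),
                         (100, "11-100"), (400, "101-400")]:
        if score <= upper:
            return label
    return "> 400"

lesions = [(4.5, 250), (2.0, 410)]   # (area mm^2, peak HU), hypothetical
score = sum(area * agatston_weight(hu) for area, hu in lesions)
print(score, risk_category(score))   # 17.0 -> "11-100"
```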

Usefulness of Deep Learning Image Reconstruction in Pediatric Chest CT (소아 흉부 CT 검사 시 딥러닝 영상 재구성의 유용성)

  • Do-Hun Kim;Hyo-Yeong Lee
    • Journal of the Korean Society of Radiology / v.17 no.3 / pp.297-303 / 2023
  • Pediatric computed tomography (CT) examinations often fail or require frequent retests because young patients have difficulty cooperating. Deep learning image reconstruction (DLIR) offers the potential to obtain diagnostically valuable images while reducing the retest rate in CT examinations of radiation-sensitive pediatric patients. In this study, we investigated whether DLIR can reduce artifacts caused by respiration or motion and yield clinically useful images in pediatric chest CT. Chest CT data of 43 children under the age of 7 from P Hospital in Gyeongsangnam-do were analyzed retrospectively. Images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR-50), and the deep learning algorithm TrueFidelity-Middle (TF-M) were compared. Regions of interest (ROIs) were drawn on the right ascending aorta (AA) and back muscle (BM) in contrast-enhanced chest images, and noise (standard deviation, SD, of Hounsfield units, HU) was measured in each image. Statistical analysis was performed in SPSS (ver. 22.0), comparing the mean values of the three measurements with one-way analysis of variance (ANOVA). The SD values for the AA were FBP = 25.65±3.75, ASIR-50 = 19.08±3.93, and TF-M = 17.05±4.45 (F = 66.72, p = 0.00), and for the BM they were FBP = 26.64±3.81, ASIR-50 = 19.19±3.37, and TF-M = 19.87±4.25 (F = 49.54, p = 0.00). Post-hoc tests revealed significant differences among the three groups. DLIR with TF-M yielded significantly lower noise than the conventional reconstruction methods. The deep learning algorithm TrueFidelity-Middle (TF-M) is therefore expected to be clinically valuable in pediatric chest CT by reducing the image-quality degradation caused by respiration or motion.
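
The noise metric and test used here, the SD of HU within an ROI compared across reconstructions by one-way ANOVA, can be sketched directly. The data below are synthetic draws seeded from the reported group means and SDs, not the study data:

```python
import numpy as np
from scipy import stats

def roi_noise_sd(hu_values):
    """Image noise as defined in the study: SD of HU within an ROI."""
    return np.std(hu_values, ddof=1)

# Synthetic per-patient noise values, drawn from the reported group
# means and SDs for the ascending aorta (NOT the actual study data).
rng = np.random.default_rng(0)
fbp = rng.normal(25.65, 3.75, 43)
asir50 = rng.normal(19.08, 3.93, 43)
tfm = rng.normal(17.05, 4.45, 43)

# One-way ANOVA across the three reconstruction groups, as in the paper.
f_stat, p_value = stats.f_oneway(fbp, asir50, tfm)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")

roi_hu = rng.normal(120.0, 17.05, 400)  # synthetic HU samples in one ROI
print(f"ROI noise (SD of HU): {roi_noise_sd(roi_hu):.2f}")
```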