• Title/Summary/Keyword: Noise Level

Search Results: 3,738 (processing time: 0.041 seconds)

Canine MR Images from 3T Active-Shield MRI System (3T 능동차폐형 자기공명영상 장비로부터 얻어진 개의 자기공명영상)

  • Choe, Bo-Young;Park, Chi-Bong;Kang, Sei-Kwon;Chu, Myoung-Ja;Kim, Euy-Neyng;Lee, Hyoung-Koo;Suh, Tae-Suk
    • Progress in Medical Physics
    • /
    • v.12 no.2
    • /
    • pp.113-124
    • /
    • 2001
  • For veterinary imaging diagnosis, we obtained MR images of the canine brain, spine, kidney, and pelvis from a 3T MRI system equipped with the world's first 3T active-shield magnet. Spin echo (SE) and fast spin echo (FSE) images were obtained from the brain, spine, kidney, and pelvis of normal and sick dogs using homemade birdcage and transverse electromagnetic (TEM) resonators operating in quadrature and tuned to 128 MHz. In addition, we employed a homemade saddle-shaped RF coil. Typical acquisition parameters were as follows: matrix = 512×512, field of view (FOV) = 20 cm, slice thickness = 3 mm, number of excitations (NEX) = 1. For T1-weighted MR images, we used TR = 500 ms and TE = 10 or 17.4 ms. For T2-weighted MR images, we used TR = 4000 ms and TE = 108 ms. The signal-to-noise ratio (SNR) of the 3T system was measured to be 2.7 times greater than that of a prevalent 1.5T system. The high-resolution images acquired in this study represent more than a 4-fold increase in in-plane resolution relative to conventional images obtained with a 20 cm field of view and a 5 mm slice thickness. MR images obtained from the 3T system revealed numerous small venous structures throughout the image plane and provided reasonable delineation between gray and white matter. The present results demonstrate that MR images from the 3T system can provide better diagnostic resolution and sensitivity than those of a 1.5T system. The elevated SNR of 3T high-field magnetic resonance imaging can be utilized to acquire images with a resolution approaching the microscopic structural level under in vivo conditions. These images represent a significant advance in our ability to examine small anatomical features with noninvasive imaging methods. Moreover, MRI techniques could begin to be applied in veterinary medicine in Korea.

  • PDF
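The 2.7-fold SNR comparison reported above can be illustrated with a toy ROI-based measurement. The SNR convention used here (tissue-ROI mean over background-ROI standard deviation) and all image values are assumptions for illustration, not the paper's data or method:

```python
import numpy as np

def roi_snr(image, signal_roi, noise_roi):
    """SNR as the mean of a tissue ROI over the standard deviation of a
    background (air) ROI -- one common convention; the paper does not
    state which definition it used."""
    return image[signal_roi].mean() / image[noise_roi].std()

# Synthetic illustration: a 3T-like image with ~2.7x the signal of a
# 1.5T-like image at the same noise level.
rng = np.random.default_rng(0)
noise_sigma = 5.0
img_15t = 100.0 + rng.normal(0, noise_sigma, (64, 64))
img_3t = 270.0 + rng.normal(0, noise_sigma, (64, 64))

sig = (slice(16, 48), slice(16, 48))   # central "tissue" ROI
bg = (slice(0, 8), slice(0, 64))       # top-edge "background" ROI

ratio = roi_snr(img_3t, sig, bg) / roi_snr(img_15t, sig, bg)
print(f"SNR ratio 3T/1.5T ~ {ratio:.1f}")
```

With identical noise levels, the ratio reduces to the ratio of mean signals, which is how a field-strength SNR gain is often summarized.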

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.99-112
    • /
    • 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention from the machine learning and artificial intelligence fields because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and SVM. In those studies, DT ensembles have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown improvements as remarkable as those of DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when its classifiers are highly correlated with one another; the resulting multicollinearity problem leads to performance degradation of the ensemble. These works have also proposed differentiated learning strategies to cope with this degradation. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that the ensemble contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers. Therefore, an ensemble of unstable learners can guarantee some diversity among the classifiers.
On the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, and thus the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared bankruptcy prediction on Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT, while, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically proves that the performance degradation of the ensemble is due to multicollinearity, and it proposes that optimization of the ensemble is needed to cope with the problem. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve NN ensemble performance. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble so as to guarantee the diversity of the classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of the classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver.
Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that considers the correlations within the ensemble. Classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and thereby CO-NN has shown higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered in future research. Second, various learning strategies to deal with data noise should be introduced in more advanced future work.
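The encoding described above (binary chromosomes, one bit per classifier, fitness penalized by a VIF constraint) can be sketched end to end. The data, the VIF limit of 10, the GA settings, and the use of averaged outputs as the combination function are all illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in data (not from the paper): validation-set outputs
# of 10 base classifiers; three are near-duplicates so that the full
# ensemble suffers from multicollinearity.
n_val, n_clf = 200, 10
y = rng.integers(0, 2, n_val)
outputs = np.clip(0.6 * y[:, None] + rng.normal(0.2, 0.25, (n_val, n_clf)), 0, 1)
outputs[:, 7] = np.clip(outputs[:, 6] + rng.normal(0, 0.01, n_val), 0, 1)
outputs[:, 8] = np.clip(outputs[:, 6] + rng.normal(0, 0.01, n_val), 0, 1)

def max_vif(X):
    """Largest variance inflation factor among the columns of X."""
    if X.shape[1] < 2:
        return 1.0
    vifs = []
    for j in range(X.shape[1]):
        A = np.column_stack([np.delete(X, j, axis=1), np.ones(len(X))])
        coef, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        resid = X[:, j] - A @ coef
        ss_tot = ((X[:, j] - X[:, j].mean()) ** 2).sum()
        r2 = 1.0 - resid @ resid / ss_tot
        vifs.append(1.0 / max(1.0 - r2, 1e-12))
    return max(vifs)

def fitness(mask, vif_limit=10.0):
    """Accuracy of the sub-ensemble (averaged outputs, thresholded),
    zeroed out when the VIF diversity constraint is violated."""
    cols = np.flatnonzero(mask)
    if len(cols) == 0 or max_vif(outputs[:, cols]) > vif_limit:
        return 0.0
    pred = (outputs[:, cols].mean(axis=1) > 0.5).astype(int)
    return float((pred == y).mean())

# Minimal generational GA over binary chromosomes (one bit per classifier).
pop = rng.integers(0, 2, (30, n_clf))
best_mask, best_fit = pop[0], fitness(pop[0])
for gen in range(40):
    fits = np.array([fitness(m) for m in pop])
    if fits.max() > best_fit:
        best_fit, best_mask = fits.max(), pop[fits.argmax()].copy()
    parents = pop[np.argsort(fits)[-10:]]            # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, len(parents), 2)]
        cut = rng.integers(1, n_clf)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(n_clf) < 0.05              # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

print("selected classifiers:", np.flatnonzero(best_mask), "accuracy:", best_fit)
```

Because any chromosome that includes two of the near-duplicate classifiers exceeds the VIF limit and scores zero, the GA is steered toward diverse sub-ensembles, which is the point of the coverage optimization.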

Clinical Utility of Turbo Contrast-Enhanced MR Angiography for the Major Branches of the Aortic Arch (대동맥궁 주요 분지들의 고속 조영증강 자기공명혈관조영술의 임상적 유용성)

  • Su Ok Seong
    • Investigative Magnetic Resonance Imaging
    • /
    • v.2 no.1
    • /
    • pp.96-103
    • /
    • 1998
  • Purpose : To assess the clinical utility of turbo contrast-enhanced magnetic resonance angiography (CE MRA) in the evaluation of the aortic arch and its major branches, and to compare the image quality of CE MRA among the different coils used. Materials and Methods : Turbo three-phase dynamic CE MRA encompassing the aortic arch and its major branches was prospectively performed after manual bolus IV injection of contrast material in 29 patients with suspected cerebrovascular disease at a 1.0T MR unit. The raw data were obtained with a 3-D FISP sequence (TR 5.4 ms, TE 2.3 ms, flip angle 30°, slab thickness 80 mm, effective slice thickness 4.0 mm, matrix size 100×256, FOV 280 mm). Total data acquisition time was 40 to 60 seconds. We subjectively evaluated the image quality with a three-point rating scheme: "good" for unequivocal normal findings, "fair" for quality satisfactory enough to diagnose 'normal' despite low intravascular signal, and "poor" for an equivocal diagnosis or non-visualization of the origin or a segment of the vessels due to low signal or artifacts, requiring catheter angiography. At the level of the carotid bifurcation, it was compared with a conventional 2D-TOF MRA image. Overall image quality was also compared visually and quantitatively by measuring signal-to-noise ratios (SNRs) of the ascending aorta, the innominate artery, and both common carotid arteries among the three different coils used (CP body array (n=12), CP neck array (n=9), and head-and-neck (n=8)). Results : Demonstration of the aortic arch and its major branches was rated as "good" in 55% (16/29) and "fair" in 34% (10/29). At the level of the carotid bifurcation, the image quality of turbo CE MRA was the same as or better than conventional 2D-TOF MRA in 65% (17/26). Overall image quality and SNR were significantly greater with the CP body array coil than with the CP neck array or head-and-neck coil.
Conclusions : Turbo CE MRA can be used as a screening exam in the evaluation of the major branches of the aortic arch from their origin to the skull base. Overall image quality appears to be better with the CP body array coil than with the CP neck array or head-and-neck coil.

  • PDF

A Fully Digital Automatic Gain Control System with Wide Dynamic Range Power Detectors for DVB-S2 Application (넓은 동적 영역의 파워 검출기를 이용한 DVB-S2용 디지털 자동 이득 제어 시스템)

  • Pu, Young-Gun;Park, Joon-Sung;Hur, Jeong;Lee, Kang-Yoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.9
    • /
    • pp.58-67
    • /
    • 2009
  • This paper presents a fully digital gain control system with a new high-bandwidth, wide-dynamic-range power detector for the DVB-S2 application. Because the peak-to-average power ratio (PAPR) of the DVB-S2 system is high and the settling time requirement is stringent, the conventional closed-loop analog gain control scheme cannot be used. Digital gain control is necessary for robust gain control and for a direct digital interface with the baseband modem. It also has several advantages over analog gain control in terms of settling time and insensitivity to process, voltage, and temperature variations. In order to achieve a wide gain range with fine step resolution, a new AGC system is proposed. The system is composed of high-bandwidth digital VGAs, wide-dynamic-range power detectors with an RMS detector, a low-power SAR-type ADC, and a digital gain controller. To reduce power consumption and chip area, only one SAR-type ADC is used, and its input is time-interleaved among the four power detectors. Simulation and measurement results show that the new AGC system converges to the desired level with a gain error of less than 0.25 dB within 10 μs. It is implemented in a 0.18 μm CMOS process. The measurement results of the proposed IF AGC system exhibit an 80-dB gain range with 0.25-dB resolution, 8 nV/√Hz input-referred noise, and 5-dBm IIP3 at 60-mW power consumption. The power detector shows a 35-dB dynamic range for a 100 MHz input.
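The converge-to-within-one-step behavior of a digital AGC loop can be sketched in a few lines. This fixed-step model is a simplification for illustration only; the numbers are invented, and a real controller like the one in the paper would use coarse/fine steps and a hardware power detector rather than an ideal dB readout:

```python
import math

def digital_agc(input_dbm, target_dbm, step_db=0.25,
                gain_db=0.0, gain_min=-40.0, gain_max=40.0, max_iters=1000):
    """Toy fixed-step digital AGC loop: a power detector reads the output
    level, and the controller nudges the VGA gain by one step per update
    until the output is within half a step of the target."""
    for n in range(max_iters):
        err_db = target_dbm - (input_dbm + gain_db)
        if abs(err_db) <= step_db / 2:
            return gain_db, n          # settled: gain error < 0.25 dB band
        gain_db = min(gain_max,
                      max(gain_min, gain_db + math.copysign(step_db, err_db)))
    return gain_db, max_iters

gain, updates = digital_agc(input_dbm=-37.3, target_dbm=0.0)
print(f"settled at {gain:.2f} dB of gain after {updates} updates")
```

With a 0.25 dB step, the loop always lands within ±0.125 dB of the target, matching the "gain error less than 0.25 dB" convergence criterion quoted above.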

A 10b 50MS/s Low-Power Skinny-Type 0.13um CMOS ADC for CIS Applications (CIS 응용을 위해 제한된 폭을 가지는 10비트 50MS/s 저 전력 0.13um CMOS ADC)

  • Song, Jung-Eun;Hwang, Dong-Hyun;Hwang, Won-Seok;Kim, Kwang-Soo;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.48 no.5
    • /
    • pp.25-33
    • /
    • 2011
  • This work proposes a skinny-type 10b 50 MS/s 0.13 μm CMOS three-step pipeline ADC for CIS applications. Analog circuits for CIS applications commonly employ a high supply voltage to acquire a sufficiently acceptable dynamic range, while digital circuits use a low supply voltage to minimize power consumption. The proposed ADC converts analog signals in a wide-swing range to low-voltage digital data using both supply voltages. An op-amp sharing technique employed in the residue amplifiers properly controls currents depending on the amplification mode of each pipeline stage, optimizes the performance of the op-amps, and improves power efficiency. In the three FLASH ADCs, the number of input stages is reduced by half with an interpolation technique, while each comparator consists of only a latch with low kick-back noise, based on pull-down switches that separate the input nodes from the output nodes. The reference circuits achieve the required settling time with only on-chip low-power drivers, and the digital correction logic has two kinds of level shifters depending on the signal voltage levels to be processed. The prototype ADC, in a 0.13 μm CMOS process supporting 0.35 μm thick-gate-oxide transistors, demonstrates measured DNL and INL within 0.42 LSB and 1.19 LSB, respectively. The ADC shows a maximum SNDR of 55.4 dB and a maximum SFDR of 68.7 dB at 50 MS/s. With an active die area of 0.53 mm², the ADC consumes 15.6 mW at 50 MS/s with an analog voltage of 2.0 V and two digital voltages of 2.8 V (D_H) and 1.2 V (D_L).
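The DNL/INL figures quoted above are the standard static-linearity metrics for an ADC. One common way such numbers are obtained is the code-density (histogram) test with a slow full-scale ramp; the sketch below is a generic illustration of that test on a synthetic 10-bit converter, not the authors' measurement setup:

```python
import numpy as np

def dnl_inl_from_histogram(codes, n_bits=10):
    """Histogram-based DNL/INL estimate from a slow full-scale ramp: each
    code should be hit equally often; relative deviations give DNL in LSB,
    and the running sum of DNL gives INL."""
    hist = np.bincount(codes, minlength=2 ** n_bits).astype(float)
    hist = hist[1:-1]                 # drop end codes, whose widths are
                                      # not bounded by two thresholds
    dnl = hist / hist.mean() - 1.0    # per-code width error in LSB
    inl = np.cumsum(dnl)
    return dnl, inl

# Synthetic 10b ADC fed a dense ramp, with small random code-width errors.
rng = np.random.default_rng(2)
n = 2 ** 10
ramp = np.linspace(0.0, 1.0, 200 * n)
thresholds = np.cumsum(np.full(n - 1, 1.0 / n)
                       + rng.normal(0, 0.02 / n, n - 1))
codes = np.searchsorted(thresholds, ramp)
dnl, inl = dnl_inl_from_histogram(codes)
print(f"max |DNL| = {np.abs(dnl).max():.3f} LSB, "
      f"max |INL| = {np.abs(inl).max():.3f} LSB")
```

Because INL accumulates DNL, a converter can have small per-code DNL (0.42 LSB here in the paper) while its INL (1.19 LSB) is noticeably larger.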

Quantitative Study of Annular Single-Crystal Brain SPECT (원형단일결정을 이용한 SPECT의 정량화 연구)

  • 김희중;김한명;소수길;봉정균;이종두
    • Progress in Medical Physics
    • /
    • v.9 no.3
    • /
    • pp.163-173
    • /
    • 1998
  • Nuclear medicine emission computed tomography (ECT) can be very useful for diagnosing early-stage neuronal diseases and for measuring therapeutic results objectively, if we can quantitate energy metabolism, blood flow, biochemical processes, or dopamine receptors and transporters using ECT. However, physical factors including attenuation, scatter, the partial volume effect, noise, and the reconstruction algorithm make quantitation very difficult, regardless of the type of SPECT system. In this study, we quantitated the effects of attenuation and scatter using brain SPECT and a three-dimensional brain phantom, with and without applying the corresponding correction methods. A dual energy window method was applied for scatter correction. The photopeak and scatter energy windows were set to 140 keV ± 10% and 119 keV ± 6%, and 100% of the scatter-window counts were subtracted from the photopeak window prior to reconstruction. The projection data were reconstructed using a Butterworth filter with a cutoff frequency of 0.95 cycles/cm and an order of 10. Attenuation correction was done by Chang's method with attenuation coefficients of 0.12/cm and 0.15/cm for the reconstructed data without and with scatter correction, respectively. For quantitation, regions of interest (ROIs) were drawn on three slices selected at the level of the basal ganglia. Without scatter correction, the ratios of ROI average values between the basal ganglia and the background, with and without attenuation correction, were 2.2 and 2.1, respectively; that is, the ratios were very similar with and without attenuation correction. With scatter correction, the corresponding ratios with and without attenuation correction were 2.69 and 2.64, respectively. These results indicate that attenuation correction is necessary for quantitation.
When the true ratios between the basal ganglia and the background were 6.58, 4.68, and 1.86, the measured ratios with scatter and attenuation correction were 76%, 80%, and 82% of the true ratios, respectively. The approximately 20% underestimation could be partially due to the effects of the partial volume and the reconstruction algorithm, which we have not investigated in this study, and partially due to the imperfect scatter and attenuation correction methods that we applied in consideration of clinical applications.

  • PDF
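The dual-energy-window subtraction described above (100% of the scatter-window counts removed from the photopeak window) can be sketched on synthetic counts. The geometry, count levels, and noise model below are invented for illustration; only the subtraction scheme follows the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical projection data: a photopeak window containing true counts
# plus scatter, and a scatter window sampling the scatter alone.
true_counts = np.full((64, 64), 20.0)
true_counts[24:40, 24:40] = 60.0           # hot "basal ganglia" region
scatter = 10.0 + rng.normal(0, 1.0, (64, 64))

photopeak = true_counts + scatter
scatter_window = scatter + rng.normal(0, 1.0, (64, 64))

k = 1.0  # 100% of the scatter-window counts subtracted, as in the paper
corrected = photopeak - k * scatter_window

def roi_ratio(img):
    """Hot-region to background ratio from two fixed ROIs."""
    return img[28:36, 28:36].mean() / img[4:12, 4:12].mean()

print(f"hot/background ratio, uncorrected: {roi_ratio(photopeak):.2f}")
print(f"hot/background ratio, corrected:   {roi_ratio(corrected):.2f}")
```

Scatter adds a roughly uniform pedestal to both ROIs, which compresses the contrast ratio; subtracting the scatter-window estimate restores it, mirroring the jump from ~2.2 to ~2.7 reported above.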

Comparative Analysis of Satisfaction according to Opened-Fencing in Campus Afforestation Project Types - Focused on University in Seoul - (대학교 담장개방 녹화사업 유형에 따른 이용 만족도 비교 분석 - 서울 소재 대학 캠퍼스를 중심으로 -)

  • Lee, Se-Mi;Kim, Dong-Chan
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.39 no.6
    • /
    • pp.57-66
    • /
    • 2011
  • This study examined the universities for which fence-opening and greening projects have been conducted by the city of Seoul. The forms of the opened fences at the 24 universities that have completed this project were classified into several types. For each type, representative cases with diverse facilities and active users were selected and investigated. The study was carried out using field observations, a literature review, and surveys. The collected questionnaires were analyzed for each type's usage frequency and overall satisfaction, together with a regression analysis of the space environment and facilities, and a one-way ANOVA was used to validate the differences between types in satisfaction with the project. The results of the usage type analysis were found to agree with the three analysis criteria (installation location, user characteristics, and usage purpose) underlying the legislative concepts. In overall satisfaction with facilities, it appeared that, except for Seoul Women's College of Nursing with its rural-district neighborhood-type park, users were satisfied: with the small urban neighborhood park of Methodist Theological College, Konkuk University's small urban square park, and Sejong University's green-space small city park. In general, users appeared dissatisfied with features such as fountains and hydroponic facilities, fitness facilities, and square facilities, which should be taken into consideration when pursuing further opening and greening projects. Regarding overall satisfaction with the space environment, users were not satisfied with Seoul Women's College of Nursing's rural-district neighborhood-style park, whereas they were satisfied with Methodist Theological College's small urban neighborhood park, Konkuk University's small urban square-style park, and Sejong University's green-space small city park.
In addition, it was shown that the facilities' usability, convenience, and privacy at the four parks were largely unsatisfactory for users, and that the small city parks located at roadsides were unsatisfactory with regard to noise level; both issues should be carefully considered when conducting similar projects in the future.

Commissioning Experience of Tri-Cobalt-60 MRI-guided Radiation Therapy System (자기공명영상유도 Co-60 기반 방사선치료기기의 커미셔닝 경험)

  • Park, Jong Min;Park, So-Yeon;Wu, Hong-Gyun;Kim, Jung-in
    • Progress in Medical Physics
    • /
    • v.26 no.4
    • /
    • pp.193-200
    • /
    • 2015
  • The aim of this study is to present the commissioning results of the ViewRay system. We verified the safety functions of the ViewRay system. For the imaging system, we measured the signal-to-noise ratio (SNR) and image uniformity, and we checked the spatial integrity of the image. Couch movement accuracy and the coincidence of the isocenters (of the radiation therapy system, the imaging system, and the virtual isocenter) were verified, and the accuracy of MLC positioning was checked. We performed reference dosimetry according to the American Association of Physicists in Medicine (AAPM) Task Group 51 (TG-51) in a water phantom for heads 1 and 3. The deviations between measurement and calculation of percent depth dose (PDD) and output factors were evaluated. Finally, we performed gamma evaluations with a total of 8 IMRT plans as an end-to-end (E2E) test of the system. Every safety system of the ViewRay operated properly. The values of SNR and uniformity met the tolerance levels. Every point within radii of 10 cm and 17.5 cm about the isocenter showed deviations of less than 1 mm and 2 mm, respectively. The average couch movement errors in the transverse (x), longitudinal (y), and vertical (z) directions were 0.2 mm, 0.1 mm, and 0.2 mm, respectively. The deviations between the radiation isocenter and the virtual isocenter in the x, y, and z directions were 0 mm, 0 mm, and 0.3 mm, respectively; those between the virtual isocenter and the imaging isocenter were 0.6 mm, 0.5 mm, and 0.2 mm, respectively. The average MLC positioning errors were less than 0.6 mm. The deviations of output, of PDDs between measurement and BJR Supplement 25, of PDDs between measurement and calculation, and of output factors for each head were less than 0.5%, 1%, 1%, and 2%, respectively. For the E2E test, the average gamma passing rate with a 3%/3 mm criterion was 99.9% ± 0.1%.
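The 3%/3 mm gamma criterion used in the E2E test above combines a dose-difference tolerance and a distance-to-agreement tolerance into one index. A minimal 1-D sketch of global gamma analysis (real commissioning uses 2-D/3-D dose grids; the profiles here are invented) looks like this:

```python
import numpy as np

def gamma_pass_rate(ref, evl, coords, dose_crit=0.03, dist_crit=3.0):
    """Simplified 1-D global gamma analysis (3%/3 mm by default): for each
    reference point, take the minimum combined dose-difference/distance
    metric over all evaluated points; gamma <= 1 counts as a pass."""
    dose_tol = dose_crit * ref.max()   # global dose-difference criterion
    passed = 0
    for d_ref, x_ref in zip(ref, coords):
        dd = (evl - d_ref) / dose_tol
        dx = (coords - x_ref) / dist_crit
        if np.sqrt(dd ** 2 + dx ** 2).min() <= 1.0:
            passed += 1
    return passed / len(ref)

x = np.linspace(0.0, 100.0, 501)            # positions in mm (0.2 mm grid)
ref = np.exp(-(((x - 50.0) / 15.0) ** 2))   # reference dose profile
evl = np.exp(-(((x - 51.0) / 15.0) ** 2))   # evaluated profile, 1 mm shift
print(f"gamma pass rate: {100.0 * gamma_pass_rate(ref, evl, x):.1f}%")
```

A 1 mm spatial shift passes everywhere under a 3 mm distance criterion even where the dose gradient is steep, which is exactly the tolerance the gamma index is designed to express.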

Closed Integral Form Expansion for the Highly Efficient Analysis of Fiber Raman Amplifier (라만증폭기의 효율적인 성능분석을 위한 라만방정식의 적분형 전개와 수치해석 알고리즘)

  • Choi, Lark-Kwon;Park, Jae-Hyoung;Kim, Pil-Han;Park, Jong-Han;Park, Nam-Kyoo
    • Korean Journal of Optics and Photonics
    • /
    • v.16 no.3
    • /
    • pp.182-190
    • /
    • 2005
  • The fiber Raman amplifier (FRA) is a distinctly advantageous technology: due to its wide, flexible gain bandwidth and intrinsically low noise characteristics, the FRA has become indispensable today. Various FRA modeling methods, with different levels of convergence speed and accuracy, have been proposed in order to gain valuable insight into FRA dynamics and optimum design before real implementation. Still, all of these approaches share the common platform of coupled ordinary differential equations (ODEs) for the Raman equation set, which must be solved along the long fiber propagation axis. The ODE platform has classically set the bar for achievable convergence speed, resulting in exhaustive calculation efforts. In this work, we propose an alternative, highly efficient framework for FRA analysis. Treating the Raman gain as a perturbation factor in an adiabatic process, we implemented the algorithm by deriving a recursive relation for the integrals of power inside the fiber in terms of the effective length, and by constructing a matrix formalism for the solution of the given FRA problem. Finally, by adiabatically turning on the Raman process in the fiber as the order of iteration increases, the FRA solution can be obtained along the iteration axis for the whole length of fiber, rather than along the fiber propagation axis, enabling faster convergence at an accuracy equivalent to that achievable with methods based on coupled ODEs. Performance comparisons for co-, counter-, and bi-directionally pumped multi-channel FRAs show convergence more than 10² times faster than the average power method at the same level of accuracy (relative deviation < 0.03 dB).
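The paper's full matrix formalism covers multi-channel, bidirectional pumping, but the core idea (solve along the iteration axis, turning the Raman coupling on gradually, instead of marching the coupled ODEs along the fiber) can be sketched for a single co-propagating pump and signal. All fiber and power parameters below are illustrative assumptions:

```python
import numpy as np

# Hypothetical single-pump, single-signal co-pumped FRA parameters.
L = 20e3                       # fiber length, m
g_r = 0.4e-3                   # Raman gain efficiency, 1/(W*m)
a_p, a_s = 0.06e-3, 0.05e-3    # pump/signal loss, 1/m
ratio = 1.55 / 1.46            # pump/signal photon-energy ratio (lam_s/lam_p)
Pp0, Ps0 = 0.5, 1e-3           # launch powers, W
z = np.linspace(0.0, L, 2001)
dz = z[1] - z[0]

def cumint(f):
    """Cumulative trapezoidal integral of samples f along z."""
    return np.concatenate([[0.0], np.cumsum((f[1:] + f[:-1]) / 2 * dz)])

# Iterative scheme in the spirit of the paper: start from loss-only
# propagation and switch the Raman coupling on over the iterations, so the
# whole-fiber solution is refined along the iteration axis.
Ps = Ps0 * np.exp(-a_s * z)
for _ in range(20):
    Pp = Pp0 * np.exp(-a_p * z - ratio * g_r * cumint(Ps))   # depleted pump
    Ps = Ps0 * np.exp(-a_s * z + g_r * cumint(Pp))           # amplified signal

gain_db = 10 * np.log10(Ps[-1] / (Ps0 * np.exp(-a_s * L)))
print(f"on-off Raman gain: {gain_db:.2f} dB")
```

Each iteration only needs cumulative integrals of the previous power profiles (closed integral form), not a step-by-step ODE march, which is where the speedup over shooting along the propagation axis comes from.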

A Study on the Field Data Applicability of Seismic Data Processing using Open-source Software (Madagascar) (오픈-소스 자료처리 기술개발 소프트웨어(Madagascar)를 이용한 탄성파 현장자료 전산처리 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration
    • /
    • v.21 no.3
    • /
    • pp.171-182
    • /
    • 2018
  • We performed seismic field data processing using open-source software (Madagascar) to verify whether it is applicable to field data, which have a low signal-to-noise ratio and high velocity uncertainty. Madagascar, based on Python, is generally considered well suited to the development of processing technologies owing to its capabilities for multidimensional data analysis and reproducibility. However, this open-source software has not been widely used for field data processing because of its complicated interfaces and data structure system. To verify the effectiveness of Madagascar on field data, we applied it to a typical seismic data processing flow including data loading, geometry build-up, F-K filtering, predictive deconvolution, velocity analysis, normal moveout correction, stacking, and migration. The field data for the test were acquired in the Gunsan Basin, Yellow Sea, using a streamer consisting of 480 channels and 4 air-gun arrays. The results at each processing step were compared with those processed with Landmark's ProMAX (SeisSpace R5000), a commercial processing package. Madagascar shows relatively high efficiency in data I/O and management, as well as reproducibility. Additionally, it shows quick and exact calculations in some automated procedures such as stacking velocity analysis. There were no remarkable differences in the results after applying the signal enhancement flows of the two packages. For the deeper part of the subsurface image, however, the commercial software shows better results than the open-source software, simply because the commercial software has various flows for demultiple and provides interactive processing environments for delicate processing work that Madagascar lacks.
Considering that many researchers around the world are developing various data processing algorithms for Madagascar, we can expect that open-source software such as Madagascar will be widely used for commercial-level processing, given its strengths of expandability, cost effectiveness, and reproducibility.
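A Madagascar flow of the kind described above is written as a Python SConstruct script, which is what gives the reproducibility the paper emphasizes. The fragment below is a config sketch: the module names are real Madagascar programs, but the parameter values and file names are placeholders, not the flow actually used on the Gunsan Basin data:

```python
# SConstruct -- illustrative Madagascar (rsf.proj) processing flow.
from rsf.proj import *

# Data loading: read SEG-Y field data into RSF format with its headers.
Flow('shots tshots', 'field.segy',
     'segyread tape=$SOURCE tfile=${TARGETS[1]}', stdin=0)

# Noise suppression: a simple band-pass stands in for the F-K filter
# and predictive deconvolution stages of the paper's flow.
Flow('filtered', 'shots', 'bandpass flo=5 fhi=60')

# Velocity analysis on CMP gathers (assumed already sorted to 'cmps'):
# semblance scan, automatic picking, then NMO and stack.
Flow('semblance', 'cmps', 'vscan semblance=y v0=1400 dv=25 nv=100 half=y')
Flow('vnmo', 'semblance', 'pick rect1=25 rect2=10')
Flow('nmoed', 'cmps vnmo', 'nmo velocity=${SOURCES[1]} half=y')
Flow('stacked', 'nmoed', 'stack')

# Post-stack migration (constant-velocity Stolt as a simple stand-in).
Flow('migrated', 'stacked', 'stolt vel=1500')

Result('migrated', 'grey title="Migrated section"')
End()
```

Because every target is declared with its source and command, rerunning `scons` rebuilds only what changed and reproduces the entire section from the raw SEG-Y, which is the reproducibility advantage noted in the abstract.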