• Title/Summary/Keyword: Iterative analysis


Evaluation of Image Noise and Radiation Dose Analysis in Brain CT Using ASIR (Adaptive Statistical Iterative Reconstruction) (ASIR를 이용한 두부 CT의 영상 잡음 평가 및 피폭선량 분석)

  • Jang, Hyon-Chol;Kim, Kyeong-Keun;Cho, Jae-Hwan;Seo, Jeong-Min;Lee, Haeng-Ki
    • Journal of the Korean Society of Radiology
    • /
    • v.6 no.5
    • /
    • pp.357-363
    • /
    • 2012
  • The purpose of this study was to evaluate image noise, image quality, and dose reduction in brain CT using an adaptive statistical iterative reconstruction (ASIR) algorithm. Head CT examinations were divided into a group without ASIR (group A) and a group with ASIR 50% (group B). In the phantom study, the measured CT noise of group B was reduced by 46.9%, 48.2%, 43.2%, and 47.9% relative to group A at the central region (A) and the peripheral regions (B, C, D). For the quantitative image-quality evaluation, CT numbers were measured and the noise was analyzed. The image noise differed statistically between the two groups and was significantly higher in group A than in group B (31.87 HU, 31.78 HU, 26.6 HU, and 30.42 HU; P<0.05). In the qualitative evaluation using a head clinical image evaluation table scored out of 80 points, observers 1 and 2 scored group A at 73.17 and 74.2, and group B at 71.77 and 72.47; the difference was not statistically significant (P>0.05), and no image was judged inappropriate for diagnosis. As for exposure dose, applying ASIR 50% reduced the radiation dose by 47.6% with no decline in image quality. In conclusion, if ASIR is applied in clinical practice, examinations are considered possible at a substantially reduced dose, which should be a positive factor in the examiner's decision-making.
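The percent noise reductions quoted above (46.9-48.2%) are relative decreases of the measured noise in the ASIR group with respect to the non-ASIR group. A minimal sketch of that arithmetic; the HU values are illustrative placeholders, not the study's raw measurements:

```python
def percent_reduction(noise_a: float, noise_b: float) -> float:
    """Percent reduction of group B's measured noise relative to group A's."""
    return (noise_a - noise_b) / noise_a * 100.0

# Illustrative: if group A measures 30.0 HU of noise and group B 15.9 HU,
# the reduction is 47.0 %.
print(round(percent_reduction(30.0, 15.9), 1))  # 47.0
```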

Development of Gated Myocardial SPECT Analysis Software and Evaluation of Left Ventricular Contraction Function (게이트 심근 SPECT 분석 소프트웨어의 개발과 좌심실 수축 기능 평가)

  • Lee, Byeong-Il;Lee, Dong-Soo;Lee, Jae-Sung;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.2
    • /
    • pp.73-82
    • /
    • 2003
  • Objectives: A new software package (Cardiac SPECT Analyzer: CSA) was developed for quantification of volumes and ejection fraction on gated myocardial SPECT. Volumes and ejection fraction by CSA were validated by comparison with those quantified by the Quantitative Gated SPECT (QGS) software. Materials and Methods: Gated myocardial SPECT was performed in 40 patients with ejection fractions from 15% to 85%. In 26 patients, gated myocardial SPECT was acquired again with the patients in situ. A cylinder model was used to eliminate noise semi-automatically, and profile data were extracted using Gaussian fitting after smoothing. The boundary points of the endo- and epicardium were found using an iterative learning algorithm. End-diastolic (EDV) and end-systolic (ESV) volumes and ejection fraction (EF) were calculated. These values were compared with those calculated by QGS; the same gated SPECT data were quantified repeatedly by CSA to assess the variation of the values, and sequential measurements of the same patients on the repeated acquisitions were compared. Results: For the 40 patient data sets, EF, EDV, and ESV by CSA were correlated with those by QGS, with correlation coefficients of 0.97, 0.92, and 0.96. Two standard deviations (SD) of EF on the Bland-Altman plot was 10.1%. Repeated measurements of EF, EDV, and ESV by CSA were correlated with each other with coefficients of 0.96, 0.99, and 0.99, respectively. On repeated acquisition, reproducibility was also excellent, with correlation coefficients of 0.89, 0.97, and 0.98; coefficients of variation of 8.2%, 5.4 mL, and 8.5 mL; and 2SD of 10.6%, 21.2 mL, and 16.4 mL on the Bland-Altman plot for EF, EDV, and ESV, respectively. Conclusion: We developed the CSA software for quantification of volumes and ejection fraction on gated myocardial SPECT. Volumes and ejection fraction quantified using this software were found valid in terms of correctness and precision.
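The ejection fraction that both CSA and QGS quantify follows from the standard definition EF = (EDV - ESV) / EDV x 100. A minimal sketch with illustrative volumes (not values from the study):

```python
def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """Left-ventricular ejection fraction in percent from
    end-diastolic (EDV) and end-systolic (ESV) volumes."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Illustrative: EDV = 120 mL, ESV = 48 mL gives EF = 60 %.
print(round(ejection_fraction(120.0, 48.0), 1))  # 60.0
```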

Performance Test of Hypocenter Determination Methods under the Assumption of Inaccurate Velocity Models: A case of surface microseismic monitoring (부정확한 속도 모델을 가정한 진원 결정 방법의 성능평가: 지표면 미소지진 모니터링 사례)

  • Woo, Jeong-Ung;Rhie, Junkee;Kang, Tae-Seob
    • Geophysics and Geophysical Exploration
    • /
    • v.19 no.1
    • /
    • pp.1-10
    • /
    • 2016
  • The hypocenter distribution of microseismic events generated by hydraulic fracturing for shale gas development provides essential information for understanding the characteristics of the fracture network. In this study, we evaluate how inaccurate velocity models influence the inversion results of two widely used location programs, hypoellipse and hypoDD, both of which are based on iterative linear inversion. We assume that 98 stations are densely located inside a circle with a radius of 4 km and that 5 artificial hypocenter sets (S0 ~ S4) are located from the center of the network to the south at 1 km intervals. Each hypocenter set contains 25 events placed on a plane. To quantify the accuracy of the inversion results, we defined 6 parameters: the difference between the average assumed and inverted hypocenter locations, $d_1$; the ratio of the areas estimated by the assumed and inverted hypocenters, r; the difference in dip between the reference plane and the best-fitting plane for the determined hypocenters, ${\theta}$; the difference in strike between the reference plane and the best-fitting plane, ${\phi}$; the root-mean-square distance between the hypocenters and the best-fitting plane, $d_2$; and the root-mean-square error in the horizontal direction on the best-fitting plane, $d_3$. Synthetic travel times are calculated for a reference model with a 1D layered structure, and the inaccurate velocity models for the inversion are constructed by perturbing the reference model with normally distributed errors with standard deviations of 0.1, 0.2, and 0.3 km/s, respectively. The parameters $d_1$, r, ${\theta}$, and $d_2$ show positive correlation with the level of velocity perturbation, but the others are not sensitive to the perturbations, except for S4, which is located at the outer boundary of the network. In the cases of S0, S1, S2, and S3, hypoellipse and hypoDD provide similar results for $d_1$. However, for the other parameters, hypoDD shows much better results, and location errors can be reduced to about several meters regardless of the level of perturbation. Given the purpose of understanding the characteristics of hydraulic fracturing, the $1{\sigma}$ error of the velocity structure should be under 0.2 km/s for hypoellipse and 0.3 km/s for hypoDD.
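The velocity-model perturbation described above can be sketched in a few lines: each layer velocity of a reference 1-D model receives zero-mean Gaussian noise with a chosen standard deviation. The layer velocities and the seed below are illustrative assumptions, not the paper's reference model:

```python
import random

def perturb_model(velocities_kms, sigma_kms, seed=0):
    """Return a copy of a 1-D layered velocity model (km/s per layer)
    with zero-mean Gaussian perturbations of standard deviation
    sigma_kms added to each layer."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [v + rng.gauss(0.0, sigma_kms) for v in velocities_kms]

# Illustrative 4-layer reference model, perturbed at the 0.2 km/s level.
reference = [3.0, 4.5, 5.8, 6.4]
perturbed = perturb_model(reference, sigma_kms=0.2)
```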

The Analysis of Cost Structure and Productivity in the Korea and Japan Railroad Industry (한국과 일본 철도산업의 비용구조와 생산성 분석)

  • Park, Jin-Gyeong;Kim, Seong-Su
    • Journal of Korean Society of Transportation
    • /
    • v.24 no.2 s.88
    • /
    • pp.65-78
    • /
    • 2006
  • This paper investigates the cost structure of the Korean and Japanese railroad industries with respect to density, scale, and scope economies as well as productivity growth rates, using a generalized translog multiproduct cost function model. The paper assumes that the Korean and Japanese railway companies produce three outputs (incumbent-railway passenger-kilometers, Shinkansen passenger-kilometers, and ton-kilometers of freight) using four input factors (labor, fuel, maintenance, rolling stock and capital). The specified cost function includes four other independent variables: track length to reflect network effects, two dummies to reflect nation and ownership effects, and a time trend as a proxy for technical change. The simultaneous equation system, consisting of a cost function and three input share equations, is estimated with Zellner's iterative seemingly unrelated regression. The unbalanced panel data used in the paper, a total of 154 observations, are collected from the annual records of the Korea National Railroad (KNR) for the years $1977{\sim}2003$, the Japan National Railways (JNR) for the years $1977{\sim}1984$, and the seven Japan Railways (JRs) for the years $1987{\sim}2003$. The findings show that the Korean and Japanese railways exhibit product-specific and overall economies of density but product-specific diseconomies of scale with respect to incumbent-railway passenger-kilometers, Shinkansen passenger-kilometers, and ton-kilometers. However, the railways experience mild overall economies of scale, which result from economies of scope associated with the joint production of incumbent railway/Shinkansen and freight, and of freight/incumbent railway and Shinkansen, but not of Shinkansen/incumbent railway and freight. In addition, the economies of density and scale of the KNR, JR East, JR Central, and JR West companies at the $1990{\sim}2003$ average are generally analogous to the above results at the sample average. There also appear to be economies of scope associated with the joint production of the incumbent railway and Shinkansen in JR Central but diseconomies of scope in JR East and JR West. The findings also indicate that the productivity growth rate of the privately-owned JRs is larger than that of the government-owned KNR.
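The degree of scope economies discussed above is conventionally measured by comparing the cost of separate production with the cost of joint production: SC = [C(y1, 0) + C(0, y2) - C(y1, y2)] / C(y1, y2), with SC > 0 indicating economies of scope. A minimal sketch of this conventional measure; the cost figures are hypothetical and not estimates from the paper's translog model:

```python
def scope_economies(c_sep1: float, c_sep2: float, c_joint: float) -> float:
    """Degree of scope economies: positive values mean joint production
    is cheaper than producing the two outputs separately."""
    return (c_sep1 + c_sep2 - c_joint) / c_joint

# Hypothetical costs: 60 + 55 separately vs. 100 jointly -> SC = 0.15,
# i.e., joint production saves 15 % of the joint cost.
print(round(scope_economies(60.0, 55.0, 100.0), 2))  # 0.15
```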

The Evaluation of Attenuation Difference and SUV According to Arm Position in Whole Body PET/CT (전신 PET/CT 검사에서 팔의 위치에 따른 감약 정도와 SUV 변화 평가)

  • Kwak, In-Suk;Lee, Hyuk;Choi, Sung-Wook;Suk, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.21-25
    • /
    • 2010
  • Purpose: For accurate PET imaging, a transmission scan is required for attenuation correction. Attenuation is affected by the acquisition conditions and the patient's position, so the quantitative accuracy of the emission scan images may be degraded. The present study measures the attenuation varying with the position of the patient's arms in whole-body PET/CT and comparatively analyzes the resulting SUV changes. Materials and Methods: A NEMA 1994 PET phantom was filled with $^{18}F$-FDG at a 4:1 concentration ratio of insert cylinder to background water. Phantom images were acquired through a 4-minute emission scan after a transmission scan using CT. To simulate the state in which the patient's arms are lowered beside the body, images were also acquired with two Teflon inserts additionally fixed at both sides of the phantom. The acquired images were reconstructed with an iterative reconstruction method (iterations: 2, subsets: 28) and CT-based attenuation correction, and a VOI was drawn on each image plane to measure the CT number and SUV and to comparatively analyze the axial uniformity (A.U = standard deviation / average SUV) of the PET images. Results: In the phantom test, comparing the cases with the Teflon inserts fixed and removed, the CT number of the cylinder increased from -5.76 HU to 0 HU, while the SUV decreased from 24.64 to 24.29 and the A.U from 0.064 to 0.052. The CT number of the background water increased from -6.14 HU to -0.43 HU, whereas the SUV decreased from 6.3 to 5.6 and the A.U from 0.12 to 0.10. In addition, for the patient images, the CT number increased from 53.09 HU to 58.31 HU and the SUV decreased from 24.96 to 21.81 when the patient's arms were positioned over the head rather than lowered. Conclusion: When the arms-up protocol was applied, the SUV of the phantom and patient images decreased by 1.4% and 9.2%, respectively. The study concluded that in whole-body PET/CT scanning the arm position itself is not greatly significant. However, scanning with the arms raised over the head carries a higher probability of patient motion because of the long scanning time, increases the $^{18}F$-FDG uptake of brown fat at the shoulders, and imposes additional shoulder pain and discomfort on the patient. Considering all of these factors, it is reasonable to perform whole-body PET/CT with the patient's arms lowered.
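The axial-uniformity figure of merit defined above (A.U = standard deviation / average SUV over the image planes) takes a few lines to compute. The plane-by-plane SUV values below are illustrative placeholders, not the study's measurements:

```python
import statistics

def axial_uniformity(plane_suvs):
    """A.U = population standard deviation / mean of the per-plane SUVs;
    smaller values mean a more uniform axial SUV profile."""
    return statistics.pstdev(plane_suvs) / statistics.fmean(plane_suvs)

# Illustrative SUVs measured on four consecutive image planes.
planes = [24.1, 24.9, 23.8, 25.2]
print(round(axial_uniformity(planes), 3))  # 0.023
```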


A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal
    • /
    • v.14 no.1
    • /
    • pp.83-98
    • /
    • 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. However, there are some exceptions. Purohit (1992) and Sullivan (1990) looked into both the new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, as in Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply the nested logit model that assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes that there is no decision hierarchy and that new and used cars of different models are all substitutable at the first stage.
The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new car and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for the compact cars sold during the period January 2009 - June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA (Independence of Irrelevant Alternatives) model in both the calibration and holdout samples. The other comparison model, which assumes choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1. Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share to maintain the status quo. The new car settles down to a lowered market share due to the used car's reaction.
The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reactive cars respond to price promotions to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, and therefore suggests a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model does. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). Future research might explore the plausibility of the alternative nested logit model. For example, the NUB model, which assumes choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected due to true mis-specification or due to the data structure transmitted from a typical car dealership. In a typical car dealership, both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there were dealerships that carried both new and used cars of various models, the NUB model might fit the data as well as the BNU model.
Which model is a better description of the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture model of the BNU and NUB on a new data set.
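The two-stage decision hierarchy described above (car-model choice at the first stage, new-vs-used choice at the second) can be sketched as a textbook two-level nested logit. The utilities and the dissimilarity parameter below are illustrative assumptions, not the paper's estimates; a well-specified model requires the dissimilarity parameter to lie in (0, 1]:

```python
import math

def nested_logit_probs(utilities, lam):
    """Two-level nested logit.  utilities maps nest -> {alternative: V};
    lam is the dissimilarity (inclusive value) parameter, 0 < lam <= 1
    for consistency with utility maximization.  Returns P(nest, alt)."""
    # Inclusive value of each nest: IV_n = lam * log(sum_j exp(V_nj / lam))
    inclusive = {n: lam * math.log(sum(math.exp(v / lam) for v in alts.values()))
                 for n, alts in utilities.items()}
    denom = sum(math.exp(iv) for iv in inclusive.values())
    probs = {}
    for n, alts in utilities.items():
        p_nest = math.exp(inclusive[n]) / denom           # first-stage choice
        within = sum(math.exp(v / lam) for v in alts.values())
        for a, v in alts.items():                          # second-stage choice
            probs[(n, a)] = p_nest * math.exp(v / lam) / within
    return probs

# Illustrative example: two car models (nests), each available new or used.
p = nested_logit_probs({"Jetta":   {"new": 1.0, "used": 0.5},
                        "Corolla": {"new": 0.8, "used": 0.6}},
                       lam=0.7)
assert abs(sum(p.values()) - 1.0) < 1e-9  # probabilities sum to one
```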
