• Title/Summary/Keyword: Gaussian


Linearity Estimation of PET/CT Scanner in List Mode Acquisition (List Mode에서 PET/CT Scanner의 직선성 평가)

  • Choi, Hyun-Jun;Kim, Byung-Jin;Ito, Mikiko;Lee, Hong-Jae;Kim, Jin-Ui;Kim, Hyun-Joo;Lee, Jae-Sung;Lee, Dong-Soo
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.16 no.1
    • /
    • pp.86-90
    • /
    • 2012
  • Purpose: Quantification of myocardial blood flow (MBF) using dynamic PET imaging has the potential to assess coronary artery disease. Rb-82 plays a key role in the clinical assessment of myocardial perfusion using PET. However, MBF can be overestimated due to underestimation of the left ventricular input function at the beginning of the acquisition when the scanner exhibits non-linearity between count rate and activity concentration caused by scanner dead-time. Therefore, in this study, we evaluated the count rate linearity as a function of activity concentration in PET data acquired in list mode. Materials & methods: A cylindrical phantom (diameter, 12 cm; length, 10.5 cm) filled with 296 MBq of F-18 solution in 800 mL of water was used to estimate the linearity of the Biograph 40 True Point PET/CT scanner. PET data were acquired in list mode with a 10 min, single-bed frame at each activity concentration level over 7 half-lives. The images were reconstructed with the OSEM and FBP algorithms. Prompt, net true, and random counts of the PET data were measured as a function of activity concentration. Total and background counts were measured by drawing ROIs on the phantom images, and linearity was assessed using background correction. Results: The prompt count rates in list mode increased linearly with activity concentration. At low activity concentrations (<30 kBq/mL), the prompt, net true, and random count rates increased with activity concentration. At high activity concentrations (>30 kBq/mL), the rate of increase of the net true counts decreased slightly while that of the random counts increased. There was no difference in image intensity linearity between the OSEM and FBP algorithms. Conclusion: The Biograph 40 True Point PET/CT scanner showed good count rate linearity even at a high activity concentration (~370 kBq/mL). The result indicates that the scanner is suitable for quantitative analysis of dynamic cardiac studies using Rb-82, N-13, O-15, and F-18.
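
The linearity evaluation above reduces to fitting count rate against decay-corrected activity concentration. The following is a minimal sketch of such an analysis on synthetic data: the count-rate scale and the paralyzable dead-time constant tau are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Decaying activity concentration over 7 half-lives of F-18 (10 min frames)
A0 = 296e3 / 800.0                 # initial concentration: 296 MBq in 800 mL -> 370 kBq/mL
half_life = 109.77                 # F-18 half-life (min)
t = np.arange(0.0, 7 * half_life, 10.0)
conc = A0 * 0.5 ** (t / half_life)                  # kBq/mL at each frame

# Synthetic prompt rates from a paralyzable dead-time model (tau is assumed)
tau = 2e-7                         # s, illustrative dead-time constant
ideal = 5.0e3 * conc               # ideal linear response (counts per second)
prompts = ideal * np.exp(-ideal * tau)

# Fit the low-activity frames (<30 kBq/mL), where the response is linear,
# then report each frame's relative deviation from that line
low = conc < 30.0
slope, intercept = np.polyfit(conc[low], prompts[low], 1)
deviation = prompts / (slope * conc + intercept) - 1.0
print(f"max count-rate loss: {-deviation.min():.1%}")
```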


Birth Weight Distribution by Gestational Age in Korean Population: Using Finite Mixture Model (우리나라 신생아의 재태 연령에 따른 출생체중의 정상치 : Finite Mixture Model을 이용하여)

  • Lee, Jung-Ju;Park, Chang Gi;Lee, Kwang-Sun
    • Clinical and Experimental Pediatrics
    • /
    • v.48 no.11
    • /
    • pp.1179-1186
    • /
    • 2005
  • Purpose: A universal standard of birth weight for gestational age cannot be made, since birth weight distribution varies with race and other sociodemographic factors. This report aims to establish a birth weight distribution curve by gestational age specific to Korean live births. Methods: We used the national birth certificate data of all live births in Korea from January 2001 to December 2003. For live births with gestational ages of 24 to 44 weeks (n=1,509,763), we obtained the mean birth weight, standard deviation, and 10th, 25th, 50th, 75th, and 90th percentile values for each gestational age group in one-week increments. Then, we examined the birth weight distribution of each gestational age group against the normal Gaussian model. To establish final standard values of the Korean birth weight distribution by gestational age, we used a finite mixture model to eliminate erroneous birth weights for the respective gestational ages. Results: For gestational ages of 28 to 32 weeks, the birth weight distribution showed a biologically implausible skewed tail or bimodal distribution. After correcting the erroneous distribution using the finite mixture model, the constructed birth weight distribution curve was compared with those of other studies. The Korean birth weight percentile values were generally lower than those for Norwegians and North Americans, particularly after 37 weeks of gestation. The Korean curve was similar to that of Lubchenco at both the 50th and 90th percentiles, but generally had higher 10th percentile values. Conclusion: This birth weight distribution curve by gestational age is based on more recent, nationwide population data than previous Korean studies. We hope that this curve will help clinicians in defining and managing large-for-gestational-age infants as well as infants with intrauterine growth retardation.
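
The finite-mixture cleanup step can be illustrated in a few lines. The sketch below uses scikit-learn's GaussianMixture as a stand-in for the authors' finite mixture model, with a synthetic 30-week cohort contaminated by misrecorded term weights; all numbers are assumptions for demonstration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 30-week cohort: plausible weights plus misrecorded term weights (g)
weights = np.concatenate([
    rng.normal(1500, 300, 950),    # biologically plausible component
    rng.normal(3300, 400, 50),     # erroneous second mode
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(weights)
main = np.argmax(gmm.weights_)                 # dominant (plausible) component
clean = weights[gmm.predict(weights) == main]  # drop records from the other mode
print(clean.mean(), np.percentile(clean, [10, 50, 90]))
```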

Comparison of Sea Level Data from TOPEX/POSEIDON Altimeter and in-situ Tide Gauges in the East Asian Marginal Seas (동아시아 주변해역에서의 TOPEX/POSEIDON 고도 자료와 현장 해수면 자료의 비교)

  • Youn, Yong-Hoon;Kim, Ki-Hyun;Park, Young-Hyang;Oh, Im-Sang
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.5 no.4
    • /
    • pp.267-275
    • /
    • 2000
  • In an effort to assess the reliability of the satellite altimeter system, we conducted a comparative analysis of sea level data collected by the TOPEX/POSEIDON (T/P) altimeter and 10 tide gauge (TG) stations along the satellite's ground track. The analysis used data sets collected from the marginal sea regions surrounding the Korean Peninsula during T/P cycles 2 to 230, corresponding to October 1992 to December 1998. Because of strong tidal activity in the study area, the treatment of tidal errors is a critical step in data processing. Hence, in the computation of dynamic heights from the T/P data, we adopted the procedures of Park and Gamberoni (1995) to reduce the associated errors. In the treated T/P data, the alias periods of the M$_2$, S$_2$, and K$_1$ constituents were found at 62.1, 58.7, and 173 days. The compatibility of the T/P and TG data sets was examined at various filtering periods. The results indicate that the low-frequency signal of the T/P data can be interpreted more safely with longer filtering periods (up to the maximum selected value of 200 days). When the RMS errors for the 200-day low-pass filtering period were compared across all 10 tidal stations, the values ranged from 2.8 to 6.7 cm. Correlation analysis at this filtering period also showed strong agreement between the T/P and TG data sets over all stations investigated (e.g., P values consistently less than 0.0001). Based on our analysis, we conclude that surface sea level can be analyzed safely from satellite altimeter data with reasonably long filtering periods such as 200 days.
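
The 200-day low-pass comparison can be sketched as follows; the daily sampling, Butterworth filter order, and synthetic series are assumptions introduced for illustration, not the authors' processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import pearsonr

def lowpass(x, cutoff_days=200.0, fs_per_day=1.0, order=4):
    # Butterworth low-pass with a 1/200 cycles-per-day cutoff
    b, a = butter(order, (1.0 / cutoff_days) / (fs_per_day / 2.0))
    return filtfilt(b, a, x)

rng = np.random.default_rng(1)
t = np.arange(2000.0)                               # ~5.5 years, daily samples
tg = 10 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 3, t.size)  # tide gauge (cm)
tp = tg + rng.normal(0, 3, t.size)                  # altimeter proxy (cm)

tg_f, tp_f = lowpass(tg), lowpass(tp)
rms = np.sqrt(np.mean((tg_f - tp_f) ** 2))          # cf. the 2.8-6.7 cm range
r, p = pearsonr(tg_f, tp_f)
print(f"RMS {rms:.1f} cm, r = {r:.3f}, P = {p:.2g}")
```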


Effect of Noise on Density Differences of Tissue in Computed Tomography (컴퓨터 단층촬영의 조직간 밀도차이에 대한 노이즈 영향)

  • Yang, Won Seok;Son, Jung Min;Chon, Kwon Su
    • Journal of the Korean Society of Radiology
    • /
    • v.12 no.3
    • /
    • pp.403-407
    • /
    • 2018
  • Currently, lung cancer has the highest cancer death rate in Korea and is a typical cancer that is difficult to detect early. Low-dose chest CT is used for early detection, giving a lung cancer detection rate about three times higher than regular chest X-ray images. However, low-dose chest CT not only reduces image resolution significantly but also yields a weak signal that is sensitive to noise. In addition, the air-filled lungs are low-density organs, and the presence of noise can significantly affect early diagnosis of cancer. In this study, a phantom image was constructed in Visual C++: a circle with a density of 1.0 (the density of water) was placed inside a large circle with a density of 2.0, and five small circles of different densities were placed within the inner circle. Gaussian noise was added at levels of 1%, 2%, 3%, and 4% to determine the effect of noise on the mean value, the standard deviation, and the signal-to-noise ratio (SNR). With 1% noise, the SNR was 4.669 in the area with the greatest density difference between the large and small circles, and 1.183 in the area with the smallest density difference. The SNR values were similarly high whether the density difference was positive or negative. Image quality was also clearly better when the density difference was large, and as the noise level increased, the SNR decreased and the influence of noise became significant. Low-density organs, or organs in regions of density similar to that of a cancer, will suffer significant noise effects, and the effect of density differences on noise will affect diagnosis.
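
A rough Python re-creation of the described phantom experiment is sketched below (the paper used Visual C++); the geometry, densities, and ROI definitions are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
size, noise_pct = 256, 0.01                         # 1% Gaussian noise level

yy, xx = np.mgrid[:size, :size]
img = np.full((size, size), 2.0)                    # outer circle region, density 2.0
water = (xx - 128) ** 2 + (yy - 128) ** 2 < 100 ** 2
img[water] = 1.0                                    # inner water circle, density 1.0
inset = (xx - 128) ** 2 + (yy - 90) ** 2 < 15 ** 2
img[inset] = 1.5                                    # one of the small circles

noisy = img + rng.normal(0, noise_pct * img.max(), img.shape)

roi, bg = noisy[inset], noisy[water & (img == 1.0)]
snr = abs(roi.mean() - bg.mean()) / bg.std()        # contrast over background noise
print(f"SNR = {snr:.3f}")
```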

Automatic Interpretation of Epileptogenic Zones in F-18-FDG Brain PET using Artificial Neural Network (인공신경회로망을 이용한 F-18-FDG 뇌 PET의 간질원인병소 자동해석)

  • 이재성;김석기;이명철;박광석;이동수
    • Journal of Biomedical Engineering Research
    • /
    • v.19 no.5
    • /
    • pp.455-468
    • /
    • 1998
  • For the objective interpretation of cerebral metabolic patterns in epilepsy patients, we developed a computer-aided classifier using an artificial neural network. We studied interictal brain FDG PET scans of 257 epilepsy patients who were diagnosed as normal (n=64), left TLE (n=112), or right TLE (n=81) by visual interpretation. Automatically segmented volumes of interest (VOIs) were used to reliably extract features representing the patterns of cerebral metabolism. All images were spatially normalized to the MNI standard PET template and smoothed with a 16 mm FWHM Gaussian kernel using SPM96. The mean count in the cerebral region was normalized. VOIs for 34 cerebral regions were predefined on the standard template, and 17 counts from regions mirrored across the hemispheric midline were extracted from the spatially normalized images. A three-layer feed-forward error back-propagation neural network classifier with 7 input nodes and 3 output nodes was used. The network was trained to interpret metabolic patterns and produce diagnoses identical to those of expert viewers. The performance of the neural network was optimized by testing with 5 to 40 nodes in the hidden layer. Forty randomly selected images from each group were used to train the network, and the remainder were used to test the trained network. The optimized neural network gave a maximum agreement rate of 80.3% with the expert viewers; it used 20 hidden nodes and was trained for 1,508 epochs. The network also gave agreement rates of 75-80% with 10 or 30 nodes in the hidden layer. We conclude that the artificial neural network performed as well as human experts and could be potentially useful as a clinical decision support tool for the localization of epileptogenic zones.
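
A schematic stand-in for the described classifier is sketched below, with scikit-learn's MLPClassifier replacing the original back-propagation implementation; the random features are placeholders for the 7 VOI-derived inputs.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(257, 7))            # placeholder 7 VOI-derived features
y = rng.integers(0, 3, size=257)         # 0: normal, 1: left TLE, 2: right TLE

clf = MLPClassifier(hidden_layer_sizes=(20,),   # 20 hidden nodes, as in the paper
                    max_iter=1508, random_state=0)
clf.fit(X[:120], y[:120])                # 40 images per group for training
agreement = clf.score(X[120:], y[120:])  # agreement rate with the labels
print(f"agreement: {agreement:.1%}")
```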


Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.187-201
    • /
    • 2011
  • KOSPI200 index is the Korean stock price index consisting of 200 actively traded stocks in the Korean stock market. Its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index. The KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index futures markets in the world. Traders can make a profit by entering a long position on a KOSPI200 index futures contract if the KOSPI200 index will rise in the future. Likewise, they can make a profit by entering a short position if the KOSPI200 index will decline in the future. Basically, KOSPI200 index futures trading is a short-term zero-sum game, and therefore most futures traders use technical indicators. Advanced traders make stable profits by using the system trading technique, also known as algorithmic trading. Algorithmic trading uses computer programs to receive real-time stock market data, analyze stock price movements with various technical indicators, and automatically enter trading orders, such as the timing, price, or quantity of the order, without any human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices or investment risk. KOSPI200 index data is numerical time-series data, a sequence of data points measured at successive uniform time intervals such as minute, day, week, or month. KOSPI200 index futures traders use technical analysis to find patterns in the time-series chart. Although there are many technical indicators, their results indicate the market state among bull, bear, and flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies. Both decide the market state based on patterns in the KOSPI200 index time-series data. This fits well with a Markov model (MM). Everybody knows that the next price will be higher than, lower than, or similar to the last price, and that the next price is influenced by the last price. However, nobody knows the exact state of the next price, whether it goes up, down, or stays flat. So a hidden Markov model (HMM) is better suited than an MM. HMMs are divided into discrete HMMs (DHMM) and continuous HMMs (CHMM). The only difference between DHMM and CHMM is in their representation of state probabilities: DHMM uses a discrete probability density function and CHMM uses a continuous probability density function such as a Gaussian mixture model. KOSPI200 index values are real numbers that follow a continuous probability density function, so CHMM is more appropriate than DHMM for the KOSPI200 index. In this paper, we present an artificial intelligence trading system based on CHMM for KOSPI200 index futures system traders. Traders have accumulated experience with technical trading ever since the introduction of the KOSPI200 index futures market, and have applied many strategies to make a profit in trading KOSPI200 index futures. Some strategies are based on technical indicators such as moving averages or stochastics, and others are based on candlestick patterns such as three outside up, three outside down, harami, or doji star. We show a trading system using a moving average cross strategy based on CHMM, and we compare it to a traditional algorithmic trading system. We set the parameter values of the moving averages at common values used by market practitioners. Empirical results are presented comparing the simulated performance with the traditional algorithmic trading system using more than 20 years of daily KOSPI200 index data. Our suggested trading system shows higher trading performance than naive system trading.
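
As a sketch of how such a CHMM regime filter could be combined with a moving average cross signal, the following uses hmmlearn's GaussianHMM on a synthetic price series; all parameters and data are assumptions, not the authors' system.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
prices = 200 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))  # synthetic index series
returns = np.diff(np.log(prices)).reshape(-1, 1)

hmm = GaussianHMM(n_components=3, covariance_type="full", random_state=0)
hmm.fit(returns)
regime = hmm.predict(returns)            # hidden states: e.g. bull / bear / flat

# Moving average cross signal, gated by the inferred regime
fast = np.convolve(prices, np.ones(5) / 5, mode="valid")
slow = np.convolve(prices, np.ones(20) / 20, mode="valid")
go_long = fast[-1] > slow[-1]
print("latest regime:", regime[-1], "| long signal:", go_long)
```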

Study of Scatter Influence of kV-Conebeam CT Based Calculation for Pelvic Radiotherapy (골반 방사선 치료에서 산란이 kV-Conebeam CT 영상 기반의 선량계산에 미치는 영향에 대한 연구)

  • Yoon, KyoungJun;Kwak, Jungwon;Cho, Byungchul;Kim, YoungSeok;Lee, SangWook;Ahn, SeungDo;Nam, SangHee
    • Progress in Medical Physics
    • /
    • v.25 no.1
    • /
    • pp.37-45
    • /
    • 2014
  • The accuracy and uniformity of CT numbers are the main causes of radiation dose calculation error. In particular, for dose calculation based on kV cone-beam computed tomography (CBCT) images, the scatter affecting the CT numbers is known to differ considerably with object size, density, exposure conditions, and so on. In this study, the impact of scatter on CBCT-based dose calculation was evaluated to find the optimal condition minimizing the error. CBCT images were acquired under three scatter conditions ("under-scatter", "over-scatter", and "full-scatter") by adjusting the amount of scatter material around an electron density phantom (CIRS062, Tissue Simulation Technology, Norfolk, VA, USA). The CT number uniformity of the CBCT images for the water-equivalent materials of the phantom was assessed, and the location dependency, between the inner and outer parts of the phantom, was also evaluated. Electron density correction curves were derived from CBCT images of the electron density phantom in each scatter condition and applied to calculate CBCT-based doses, which were compared with doses based on fan-beam computed tomography (FBCT). In addition, 5 prostate IMRT cases were enrolled to assess the accuracy of the CBCT-based doses using gamma index analysis and relative dose differences. When the CT number histogram of the phantom CBCT images for the water-equivalent materials was fitted with a Gaussian function, the FWHM for the "full-scatter" condition (146 HU) was the smallest among the three conditions (685 HU for "under-scatter" and 264 HU for "over-scatter"). The variance of CT numbers between the same materials located at the center and periphery of the phantom was also smallest in the "full-scatter" condition. The dose distributions calculated with FBCT and CBCT images were compared by gamma index evaluation with 1%/3 mm criteria and by dose difference. With the electron density correction acquired under the same scatter condition, the CBCT-based dose calculations tended to be the most accurate. In the 5 prostate cases, in which the mean equivalent diameter was 27.2 cm, the averaged gamma pass rate was 98% and the dose difference was confirmed to be less than 2% (average 0.2%, range -1.3% to 1.6%) with the electron density correction of the "full-scatter" condition. The accuracy of CBCT-based dose calculation was confirmed to be closely related to the CT number uniformity and to the similarity between the scatter conditions of the electron density correction curve and of the CBCT image. In pelvic cases, the most accurate dose calculation was achieved by applying the electron density curves of the "full-scatter" condition.
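
The Gaussian fit used to compare CT-number spread between scatter conditions can be sketched as below; the simulated histogram (sigma chosen so that the FWHM lands near the reported 146 HU) is an assumption for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, a, mu, sigma):
    return a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

# Simulated water-equivalent CT numbers (HU); sigma ~62 gives FWHM near 146 HU
cts = np.random.default_rng(0).normal(0.0, 62.0, 20000)
hist, edges = np.histogram(cts, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

popt, _ = curve_fit(gauss, centers, hist, p0=[hist.max(), 0.0, 50.0])
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])
print(f"FWHM = {fwhm:.0f} HU")
```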

Development of Gated Myocardial SPECT Analysis Software and Evaluation of Left Ventricular Contraction Function (게이트 심근 SPECT 분석 소프트웨어의 개발과 좌심실 수축 기능 평가)

  • Lee, Byeong-Il;Lee, Dong-Soo;Lee, Jae-Sung;Chung, June-Key;Lee, Myung-Chul;Choi, Heung-Kook
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.2
    • /
    • pp.73-82
    • /
    • 2003
  • Objectives: New software (Cardiac SPECT Analyzer: CSA) was developed for the quantification of volumes and ejection fraction on gated myocardial SPECT. Volumes and ejection fraction by CSA were validated by comparison with those quantified by the Quantitative Gated SPECT (QGS) software. Materials and Methods: Gated myocardial SPECT was performed in 40 patients with ejection fractions from 15% to 85%. In 26 patients, gated myocardial SPECT was acquired again with the patients in situ. A cylinder model was used to eliminate noise semi-automatically, and profile data were extracted using Gaussian fitting after smoothing. The boundary points of the endo- and epicardium were found using an iterative learning algorithm. End-diastolic volume (EDV), end-systolic volume (ESV), and ejection fraction (EF) were calculated. These values were compared with those calculated by QGS; the same gated SPECT data were also repeatedly quantified by CSA, and the variation of the values across sequential measurements of the same patients on the repeated acquisition was assessed. Results: For the 40 patients, EF, EDV, and ESV by CSA were correlated with those by QGS, with correlation coefficients of 0.97, 0.92, and 0.96. Two standard deviations (SD) of EF on the Bland-Altman plot was 10.1%. Repeated measurements of EF, EDV, and ESV by CSA were correlated with each other, with coefficients of 0.96, 0.99, and 0.99, respectively. On repeated acquisition, reproducibility was also excellent, with correlation coefficients of 0.89, 0.97, and 0.98, coefficients of variation of 8.2%, 5.4 mL, and 8.5 mL, and 2SD of 10.6%, 21.2 mL, and 16.4 mL on the Bland-Altman plot for EF, EDV, and ESV. Conclusion: We developed the CSA software for the quantification of volumes and ejection fraction on gated myocardial SPECT. The volumes and ejection fraction quantified using this software were found valid in their correctness and precision.
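
The "2SD on Bland-Altman plot" agreement figures quoted above come from a simple computation, sketched here with hypothetical paired EF values rather than the study's measurements.

```python
import numpy as np

ef_csa = np.array([25., 38., 47., 55., 62., 70., 78.])  # hypothetical CSA EF (%)
ef_qgs = np.array([27., 36., 49., 54., 65., 69., 80.])  # hypothetical QGS EF (%)

diff = ef_csa - ef_qgs
bias = diff.mean()
loa = 2 * diff.std(ddof=1)               # the "2SD" limits of agreement
print(f"bias {bias:+.1f}%, limits of agreement {bias - loa:.1f} to {bias + loa:.1f}%")
```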

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns for investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies, such as Standard and Poor's, Moody's, and Fitch, is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on the KOSDAQ market of the Korea Exchange. In addition, this paper used a multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are the more efficient venture companies: i) making a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied for evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and it has also been applied to corporate credit ratings. In this study we utilized DEA to sort venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of the DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical theory. Thus far, the method has shown good performance, especially in generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically the algorithm that finds the maximum margin hyperplane, which gives the maximum separation between classes; the support vectors are the points closest to the maximum margin hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, i.e., the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to bankruptcy prediction, financial time series forecasting, and credit rating estimation. In this study we employed SVM to develop a data mining-based efficiency prediction model. We used the Gaussian radial basis function as the kernel function of the SVM. For the multi-class SVM, we adopted the one-against-one approach among the binary classification methods and two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three manners of multi-classification, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, where it is difficult to determine the exact class in the actual market, it is very useful for investors to know the class to within a one-class error. We therefore also present accuracy results within one-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, notwithstanding the efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
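
A condensed sketch of the multi-class RBF-kernel SVM step is given below; scikit-learn's SVC implements the one-against-one scheme natively, and the financial features and DEA rating labels are synthetic placeholders.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(154, 6))            # placeholder financial ratios, 154 firms
y = rng.integers(0, 4, size=154)         # placeholder DEA efficiency rating classes

Xs = StandardScaler().fit_transform(X)
clf = SVC(kernel="rbf", decision_function_shape="ovo")  # Gaussian RBF, one-against-one
clf.fit(Xs[:120], y[:120])
pred = clf.predict(Xs[120:])

exact = (pred == y[120:]).mean()
within_one = (np.abs(pred - y[120:]) <= 1).mean()       # the "1-class error" accuracy
print(f"exact {exact:.1%}, within one class {within_one:.1%}")
```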

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.4
    • /
    • pp.303-313
    • /
    • 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provide a parametric model of the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The quantitative evaluation was repeated over several parameters, such as the Larmor frequency, the object radius, and the SNR of the $B_1{^+}$ map. A parametric model for the conductivity-to-noise ratio was developed as a function of these parameters. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate an accurate conductivity value for the targeted tissue, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the conductivity reconstruction process. In addition, the simulation results were verified on a conventional 3T MRI scanner. Conclusion: Through these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
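
A toy version of the described noise experiment is sketched below: complex Gaussian noise is added to a simulated $B_1{^+}$ map and a phase-based Helmholtz estimate, $\sigma \approx \nabla^2\varphi/(\mu_0\omega)$, is re-evaluated. The geometry, the SNR value, and the phase-based estimator itself are assumptions for illustration, not the paper's reconstruction.

```python
import numpy as np

mu0 = 4e-7 * np.pi
omega = 2 * np.pi * 128e6                # Larmor frequency at 3T (rad/s)
dx = 0.5e-3                              # grid spacing (m)
sigma_true = 0.5                         # assumed conductivity (S/m)
snr = 200.0                              # assumed SNR of the complex B1+ map
n = 128

# 2D toy phase whose Laplacian equals mu0*omega*sigma_true everywhere
yy, xx = np.mgrid[:n, :n] * dx
phi = mu0 * omega * sigma_true * (xx ** 2 + yy ** 2) / 4.0
b1 = np.exp(1j * phi)                    # unit-magnitude complex B1+ map

rng = np.random.default_rng(0)
noise = rng.normal(0, 1 / snr, b1.shape) + 1j * rng.normal(0, 1 / snr, b1.shape)
phi_n = np.angle(b1 + noise)

# Discrete Laplacian and the phase-based conductivity estimate
lap = (np.roll(phi_n, 1, 0) + np.roll(phi_n, -1, 0) +
       np.roll(phi_n, 1, 1) + np.roll(phi_n, -1, 1) - 4 * phi_n) / dx ** 2
sigma_est = lap / (mu0 * omega)

center = sigma_est[32:96, 32:96]         # avoid the wrap-around edges of np.roll
print(center.mean(), center.std())       # large per-pixel spread shows the phase-noise sensitivity
```

The huge per-pixel spread relative to the mean illustrates the abstract's point: the estimate is dominated by phase noise, so the $B_1{^+}$ SNR and the filtering (averaging) size govern the achievable conductivity-to-noise ratio.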