• Title/Summary/Keyword: nonparametric statistical method


List-event Data Resampling for Quantitative Improvement of PET Image (PET 영상의 정량적 개선을 위한 리스트-이벤트 데이터 재추출)

  • Woo, Sang-Keun;Ju, Jung Woo;Kim, Ji Min;Kang, Joo Hyun;Lim, Sang Moo;Kim, Kyeong Min
    • Progress in Medical Physics
    • /
    • v.23 no.4
    • /
    • pp.309-316
    • /
    • 2012
  • Multimodal imaging techniques have developed rapidly to improve diagnosis and the evaluation of therapeutic effects. Despite integrated hardware, registration accuracy is reduced by discrepancies between the multimodal images and by insufficient counts arising from the different acquisition methods of each modality. The purpose of this study was to improve PET images by event-data resampling, based on an analysis of the data format, noise, and statistical properties of small-animal PET list-mode data. Inveon PET list-mode data were acquired as a 10 min static scan, 60 min after injection of 37 MBq/0.1 ml $^{18}F$-FDG via the tail vein. The list-mode format consists of 48-bit packets, each divided into an 8-bit header and a 40-bit payload. Realigned sinograms were generated from event data resampled from the original list-mode stream by adjusting LOR locations, by simple event magnification, and by nonparametric bootstrap. Sinograms were reconstructed into images using the 2D OSEM algorithm with 16 subsets and 4 iterations. The prompt coincidence count was 13,940,707 as read from the PET data header and 13,936,687 as measured by analysis of the list-event data. Simple event magnification improved the image maximum from 1.336 to 1.743, but noise also increased. Resampling efficiency was assessed from images denoised and improved by shift operations on the payload values of sequential packets. The bootstrap resampling technique yielded PET images with improved noise and statistical properties. The list-event data resampling method should help improve registration accuracy and the efficiency of early diagnosis.
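The nonparametric bootstrap step described above can be sketched as follows. The event list, LOR count, and sample sizes here are illustrative stand-ins, not the Inveon packet format: each recorded event is reduced to the index of its line of response, and bootstrap replicates are histogrammed into a 1-D stand-in for a sinogram.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical list-mode data: each event is the index of the LOR
# (line of response) it was recorded on. Sizes are illustrative.
n_lors = 1000
true_rates = rng.gamma(shape=2.0, scale=5.0, size=n_lors)
events = rng.choice(n_lors, size=50_000, p=true_rates / true_rates.sum())

def bootstrap_sinogram(events, n_lors, n_boot=100, rng=None):
    """Nonparametric bootstrap: resample events with replacement and
    histogram each replicate into LOR bins (a 1-D sinogram stand-in)."""
    rng = rng or np.random.default_rng()
    reps = np.empty((n_boot, n_lors))
    for b in range(n_boot):
        sample = rng.choice(events, size=events.size, replace=True)
        reps[b] = np.bincount(sample, minlength=n_lors)
    return reps

reps = bootstrap_sinogram(events, n_lors, n_boot=50, rng=rng)
mean_sino = reps.mean(axis=0)   # bootstrap-averaged counts (noise-smoothed)
std_sino = reps.std(axis=0)     # per-bin uncertainty estimate
```

Averaging replicates smooths bin-level noise while preserving total counts, which is the statistical property the abstract exploits before reconstruction.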

Investigation of light stimulated mouse brain activation in high magnetic field fMRI using image segmentation methods

  • Kim, Wook;Woo, Sang-Keun;Kang, Joo Hyun;Lim, Sang Moo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.12
    • /
    • pp.11-18
    • /
    • 2016
  • Magnetic resonance imaging (MRI) is widely used in brain research and medical imaging. In particular, functional magnetic resonance imaging (fMRI), a non-invasive technique for imaging brain activation, is used in brain studies. In this study, we investigated brain activation evoked by LED light stimulation. To investigate brain activation in a small experimental animal, we used a high-field 9.4 T MRI scanner. The animal was a Balb/c mouse, and the fMRI sequence was echo planar imaging (EPI). EPI is much faster than other MRI sequences, but as a consequence EPI data have low contrast, which makes image pre-processing difficult and inaccurate. We planned the study protocol as a block design, the standard protocol in fMRI research, with 8 LED light-stimulation sessions alternating with 8 rest sessions. Each block consisted of 6 EPI volumes, with one EPI slice acquired every 16 seconds. During each light session, LED stimulation was applied for 1 minute 36 seconds; during each rest session, the light remained off for 1 minute 36 seconds. These sessions repeated throughout the EPI scan, for a total scan time of almost 26 minutes. After acquiring the EPI data, we analyzed them with the statistical parametric mapping (SPM) software, performing pre-processing steps such as realignment, co-registration, normalization, and smoothing. Pre-processing of fMRI data in this software requires segmentation, for which three different methods are available: Gaussian nonparametric, warped modulated, and tissue probability map. We applied all three methods and compared how they change the fMRI analysis results. The results show that LED light stimulation activated the superior colliculus region of the mouse brain, and that the highest activation values were obtained with the tissue-probability-map segmentation. This study may help improve brain-activation studies using EPI and SPM analysis.
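The block design described above (8 stimulation and 8 rest blocks, 6 EPI volumes each, one volume per 16 s) can be written down as a boxcar regressor. The voxel time series and the single-regressor GLM fit below are illustrative stand-ins for the per-voxel fit SPM performs, not the SPM pipeline itself:

```python
import numpy as np

# Block design from the abstract: 8 stimulation and 8 rest blocks,
# 6 EPI volumes per block, TR = 16 s (so each block lasts 96 s).
vols_per_block = 6
n_blocks = 8
block = np.concatenate([np.ones(vols_per_block), np.zeros(vols_per_block)])
design = np.tile(block, n_blocks)      # 96 volumes, 96 * 16 s ~ 25.6 min

# Hypothetical voxel time series: the design signal plus Gaussian noise.
rng = np.random.default_rng(1)
voxel = 2.0 * design + rng.normal(0, 1.0, size=design.size)

# Simple GLM: regress the voxel on the boxcar plus an intercept
# (SPM fits a model of this form at every voxel).
X = np.column_stack([design, np.ones_like(design)])
beta, *_ = np.linalg.lstsq(X, voxel, rcond=None)
# beta[0] estimates the stimulation effect at this voxel
```

A real SPM analysis would convolve the boxcar with a haemodynamic response function and add motion regressors; the boxcar alone is the minimal version of the design.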

Evaluation of Reference Intervals of Some Selected Chemistry Parameters using Bootstrap Technique in Dogs (Bootstrap 기법을 이용한 개의 혈청검사 일부 항목의 참고범위 평가)

  • Kim, Eu-Tteum;Pak, Son-Il
    • Journal of Veterinary Clinics
    • /
    • v.24 no.4
    • /
    • pp.509-513
    • /
    • 2007
  • Parametric and nonparametric methods coupled with bootstrap simulation were used to re-evaluate previously defined reference intervals for serum chemistry parameters. A population-based study was performed on 100 clinically healthy dogs retrieved from the medical records of Kangwon National University Animal Hospital during 2005-2006. Data were from 52 males and 48 females (1 to 8 years old, 2.2-5.8 kg of body weight). The chemistry parameters examined were blood urea nitrogen (BUN, mg/dl), cholesterol (mg/dl), calcium (mg/dl), aspartate aminotransferase (AST, U/L), alanine aminotransferase (ALT, U/L), alkaline phosphatase (ALP, U/L), and total protein (g/dl), measured with an Ektachem DT 60 analyzer (Johnson & Johnson). All distributions except calcium were highly skewed. Outliers were identified most often in the enzyme parameters, at 5-9% of samples, compared with only 1-2% for the remaining parameters. Regardless of the distribution type of each analyte, nonparametric methods gave better estimates for clinical chemistry use than parametric methods. The means and reference intervals estimated by nonparametric bootstrap for BUN, cholesterol, calcium, AST, ALT, ALP, and total protein were 14.7 (7.0-24.2), 227.3 (120.7-480.8), 10.9 (8.1-12.5), 25.4 (11.8-66.6), 25.5 (11.7-68.9), 87.7 (31.1-240.8), and 6.8 (5.6-8.2), respectively. This study indicates that bootstrap methods can be a useful statistical tool for establishing population-based reference intervals of serum chemistry parameters, since many laboratory values do not conform to a normal distribution. The results also draw attention to the confidence intervals of analytical parameters showing distribution-related variation.
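A minimal sketch of a nonparametric bootstrap reference interval of the kind the study computes, assuming hypothetical lognormal "ALT" values (skewed, like the study's enzyme parameters) in place of the actual canine data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical skewed ALT values (U/L) for 100 clinically healthy dogs.
alt = rng.lognormal(mean=3.2, sigma=0.45, size=100)

def bootstrap_reference_interval(x, n_boot=2000, level=0.95, rng=None):
    """Nonparametric bootstrap of the central 95% reference interval:
    resample with replacement, take the 2.5th/97.5th percentiles of
    each replicate, and average them over replicates."""
    rng = rng or np.random.default_rng()
    lo_q, hi_q = 100 * (1 - level) / 2, 100 * (1 + level) / 2
    lows, highs = np.empty(n_boot), np.empty(n_boot)
    for b in range(n_boot):
        s = rng.choice(x, size=x.size, replace=True)
        lows[b], highs[b] = np.percentile(s, [lo_q, hi_q])
    return lows.mean(), highs.mean()

lo, hi = bootstrap_reference_interval(alt, rng=rng)
```

Because only ranks and percentiles of resampled data are used, no distributional assumption is needed, which is exactly why the nonparametric version outperformed the parametric one on skewed analytes.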

Effects of Goal Management Training According to Bilateral Activities of Autism Spectrum Disorders: Pilot Study (자폐스펙트럼 장애 아동에게 목표관리 훈련이 양측활동에 미치는 영향: 예비연구)

  • Ahn, Si-Nae
    • Journal of Korean Society of Neurocognitive Rehabilitation
    • /
    • v.10 no.2
    • /
    • pp.1-7
    • /
    • 2018
  • This study compared typically developing children with children diagnosed with autism spectrum disorder who received goal management training, in order to observe its effect. Four typically developing children and four children diagnosed with autism spectrum disorder participated, and all subjects received identical goal management training. The children and their caregivers selected the desired goal activities, and goal management training was applied to all three activities. The intervention was conducted ten times in total, twice a week for five weeks, and the eight subjects in the two groups were trained one-on-one by the researcher for 40 minutes per session. Descriptive statistics and frequency analysis were used, and the Mann-Whitney test, a nonparametric statistical analysis, was conducted to compare the two groups. Goal management training did not produce a statistically significant difference between the two groups in performance status on the Canadian Occupational Performance Measure (p > .05). On the summary score of the Bruininks-Oseretsky Test of Motor Proficiency (2nd edition), which evaluates motor skill, there was a statistically significant difference between the autism spectrum disorder group and the typically developing group (p < .05). The two groups also differed significantly on the eye-hand coordination sub-test of the Developmental Test of Visual Perception (2nd edition), which evaluates visual-perceptual performance (p < .05). This research confirmed the applicability of goal management training to children with autism spectrum disorder compared with typically developing children, and confirmed the effectiveness of the training.
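With four children per group, the Mann-Whitney comparison used above reduces to ranking eight pooled scores. A minimal U-statistic computation, with hypothetical change scores (the study's actual measurements are not given in the abstract):

```python
import numpy as np

def mann_whitney_u(x, y):
    """Minimal Mann-Whitney U: rank the pooled sample (midranks for
    ties) and compute U for group x. For n = 4 per group, U should be
    compared with exact critical values, not a normal approximation."""
    pooled = np.concatenate([x, y])
    order = pooled.argsort()
    ranks = np.empty(pooled.size)
    ranks[order] = np.arange(1, pooled.size + 1)
    for v in np.unique(pooled):          # midranks for tied values
        mask = pooled == v
        ranks[mask] = ranks[mask].mean()
    r_x = ranks[: x.size].sum()
    return r_x - x.size * (x.size + 1) / 2

# Hypothetical motor-proficiency change scores, 4 children per group.
asd = np.array([3.0, 5.0, 4.0, 2.0])
typical = np.array([8.0, 9.0, 7.0, 10.0])
u = mann_whitney_u(asd, typical)   # U = 0 here: complete separation
```

U ranges from 0 to 16 for two groups of four; the extreme value 0 (every ASD score below every typical score) is the strongest evidence of a group difference such a small design can produce.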

Analysis of Noise Influence on a Chaotic Series and Application of Filtering Techniques (카오스 시계열에 대한 잡음영향 분석과 필터링 기법의 적용)

  • Choi, Min Ho;Lee, Eun Tae;Kim, Hung Soo;Kim, Soo Jun
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.31 no.1B
    • /
    • pp.37-45
    • /
    • 2011
  • We studied the influence of noise on a nonlinear chaotic system using the Logistic data series, a well-known example of such a system. We regenerated the Logistic series by adding noise at various noise levels, then performed analyses such as phase-space reconstruction, correlation dimension, BDS statistics, and the DVS algorithm, which are standard methods of nonlinear deterministic or chaotic analysis. The results show that as the noise level increases, the character of the series gradually changes from a nonlinear chaotic series to a random stochastic one. We applied low-pass filter (LPF) and Kalman filter techniques to investigate how well the added noise could be removed. Typical nonparametric methods cannot distinguish a nonlinear random series, but the BDS statistic can detect nonlinear randomness in a time series; we therefore used the BDS statistic, a well-known nonlinear statistical method, to assess the randomness of the series before and after noise removal. We found that the Kalman filter is the better method for removing noise from a chaotic data series, even at high noise levels.
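One way to sketch the noise-regeneration and Kalman-filtering steps above. The abstract does not give the paper's filter settings, so this uses an extended Kalman filter built on the known logistic dynamics, with an illustrative 30% noise level; it is a sketch of the idea, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

def logistic_series(n, r=4.0, x0=0.1):
    """Logistic map x_{n+1} = r x_n (1 - x_n), the chaotic benchmark."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = r * x[i] * (1 - x[i])
    return x

clean = logistic_series(500)
noise_sd = 0.30 * clean.std()            # "noise level" of 30%
noisy = clean + rng.normal(0, noise_sd, clean.size)

def ekf_logistic(z, rv, r=4.0, q=1e-4):
    """Extended Kalman filter using the logistic dynamics:
    predict through f(x) = r x (1 - x), Jacobian F = r (1 - 2x)."""
    x, p = z[0], 1.0
    out = np.empty(z.size)
    out[0] = x
    for i in range(1, z.size):
        x_pred = r * x * (1 - x)         # predict
        F = r * (1 - 2 * x)
        p_pred = F * p * F + q
        k = p_pred / (p_pred + rv)       # update with noisy observation
        x = x_pred + k * (z[i] - x_pred)
        p = (1 - k) * p_pred
        out[i] = x
    return out

filtered = ekf_logistic(noisy, rv=noise_sd**2)
rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_filt = np.sqrt(np.mean((filtered - clean) ** 2))
```

Because the logistic map at r = 4 is strongly chaotic, the prediction step alone diverges quickly; it is the per-step measurement update that keeps the filter on the attractor, which matches the abstract's finding that Kalman-type filtering copes with chaotic series better than plain low-pass filtering.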

Identification of Single Nucleotide Polymorphism of H-FABP Gene and Its Association with Fatness Traits in Chickens

  • Wang, Yan;Shu, Dingming;Li, Liang;Qu, Hao;Yang, Chunfen;Zhu, Qing
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.20 no.12
    • /
    • pp.1812-1819
    • /
    • 2007
  • The heart fatty acid-binding protein gene (H-FABP) is an important candidate gene for meat quality. One objective of this study was to screen single nucleotide polymorphisms (SNPs) of the chicken H-FABP gene among 252 individuals from 4 Chinese domestic chicken breeds (Fengkai Xinghua (T04), Huiyang Huxu (H), Qingyuan Ma (Q), Guangxi Xiayan (S1)), 2 breeds developed by the Institute of Animal Science, Guangdong Academy of Agricultural Sciences (Lingnan Huang (DC), dwarf chicken (E4)), and one introduced broiler (Abor Acre (AA)). Another objective was to analyze the associations between H-FABP polymorphisms and fat deposition traits in chickens. PCR-SSCP was used to detect SNPs in H-FABP, and 4 SNPs (T260C, G675A, C783T and G2778A) were found. Associations between the polymorphic loci and intramuscular fat (IMF), abdominal fat weight (AFW) and abdominal fat percentage (AFP) were analyzed by the ANCOVA method. The results showed that the T260C genotypes were significantly associated with IMF (p = 0.0233) and AFP (p = 0.0001); the G675A genotypes were significantly associated with AFW and AFP (p < 0.01) and with IMF (p < 0.05); and at the C783T locus, AFW and AFP differed highly significantly between genotypes. The G2778A locus, however, showed no significant effect on fat deposition traits in this study. In addition, a nonparametric statistical method revealed differences in AFP among certain haplotypes, and haplotypes based on the SNPs other than the G2778A locus were also significantly associated with IMF, AFW (g) (p < 0.05) and AFP (%) (p < 0.001). Significant or suggestive dominant effects of the H4H4 haplotype were observed for IMF, and H2H3 was dominant for AFW (g) and AFP (%). The results also revealed that the H5H7 haplotype had a negative effect on IMF, while H5H6 had a positive effect on AFW (g) and AFP (%).
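The abstract does not name its nonparametric method; a Kruskal-Wallis rank test is a standard choice for comparing a trait across genotype or haplotype groups. A minimal H-statistic computation over hypothetical genotype groups (the trait values are invented for illustration):

```python
import numpy as np

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic (no tie correction): rank the pooled
    observations and compare rank sums across groups. Under H0, H is
    approximately chi-square with (#groups - 1) degrees of freedom."""
    pooled = np.concatenate(groups)
    order = pooled.argsort()
    ranks = np.empty(pooled.size)
    ranks[order] = np.arange(1, pooled.size + 1)
    for v in np.unique(pooled):      # midranks for tied values
        m = pooled == v
        ranks[m] = ranks[m].mean()
    n = pooled.size
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + g.size]
        h += r.sum() ** 2 / g.size
        start += g.size
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical abdominal fat percentage (%) by T260C genotype.
tt = np.array([2.1, 2.4, 2.0, 2.6])
tc = np.array([2.9, 3.1, 2.8, 3.3])
cc = np.array([3.8, 4.0, 3.6, 4.2])
h = kruskal_wallis_h([tt, tc, cc])   # compare with chi-square, 2 df
```

Here the three genotypes separate completely in rank, so H is far above the 5% chi-square critical value of 5.99 with 2 degrees of freedom.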

Classical testing based on B-splines in functional linear models (함수형 선형모형에서의 B-스플라인에 기초한 검정)

  • Sohn, Jihoon;Lee, Eun Ryung
    • The Korean Journal of Applied Statistics
    • /
    • v.32 no.4
    • /
    • pp.607-618
    • /
    • 2019
  • A new and interesting task in statistics is the effective analysis of functional data, which increasingly arises from advances in modern science and technology in areas such as meteorology and the biomedical sciences. Functional linear regression with scalar response is a popular functional data analysis technique, and a common problem is to determine whether a functional predictor variable affects the scalar response in such models. Recently, Kong et al. (Journal of Nonparametric Statistics, 28, 813-838, 2016) established classical testing methods for this based on functional principal component analysis of the functional predictor, that is, on the resulting eigenfunctions as a basis. However, the eigenbasis functions are generally unsuitable for regression purposes because they capture only the variability of the functional predictor, not the functional association of interest in the testing problem. Additionally, the eigenfunctions must be estimated from data, so estimation errors can affect the performance of the testing procedures. To circumvent these issues, we propose a testing method based on a fixed basis such as B-splines and show via simulations that it works well. Simulated and real data examples also illustrate that the proposed method gives more effective and intuitive results, thanks to the localization properties of B-splines.
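A sketch of the fixed-basis idea: expand the slope function in a cubic B-spline basis, turn the functional model into ordinary regression via the design matrix Z with entries approximating ∫ X_i(t)B_k(t) dt, and F-test whether all spline coefficients are zero. The data-generating process below is illustrative, not the paper's simulation design:

```python
import numpy as np

def bspline_basis(x, knots, degree):
    """All B-spline basis functions at points x via Cox-de Boor.
    Assumes equally spaced knots, so no 0/0 cases arise."""
    x = np.asarray(x)
    B = np.stack([(knots[k] <= x) & (x < knots[k + 1])
                  for k in range(len(knots) - 1)], axis=1).astype(float)
    for d in range(1, degree + 1):
        cols = B.shape[1] - 1
        Bn = np.zeros((x.size, cols))
        for k in range(cols):
            left = (x - knots[k]) / (knots[k + d] - knots[k])
            right = (knots[k + d + 1] - x) / (knots[k + d + 1] - knots[k + 1])
            Bn[:, k] = left * B[:, k] + right * B[:, k + 1]
        B = Bn
    return B

rng = np.random.default_rng(3)
grid = np.linspace(0.0, 0.999, 50)
dt = grid[1] - grid[0]
knots = np.linspace(-0.6, 1.6, 12)   # spacing 0.2; full support on [0, 1]
B = bspline_basis(grid, knots, 3)    # 50 x 8 cubic basis matrix

# Hypothetical functional predictors and scalar responses:
# y_i = integral of X_i(t) * beta(t) dt + eps_i with beta(t) = sin(pi t).
n_obs = 100
freqs = rng.uniform(0.5, 2.0, n_obs)
X = np.sin(2 * np.pi * np.outer(freqs, grid)) \
    + rng.normal(0, 0.2, (n_obs, grid.size))
y = X @ np.sin(np.pi * grid) * dt + rng.normal(0, 0.1, n_obs)

# Design matrix: Z[i, k] ~ integral of X_i(t) B_k(t) dt (Riemann sum),
# then an ordinary F-test of H0: all spline coefficients are zero.
Z = X @ B * dt
yc = y - y.mean()
coef, *_ = np.linalg.lstsq(Z, yc, rcond=None)
rss1 = np.sum((yc - Z @ coef) ** 2)
rss0 = np.sum(yc ** 2)
p_dim = Z.shape[1]
f_stat = ((rss0 - rss1) / p_dim) / (rss1 / (n_obs - p_dim - 1))
```

Because the basis is fixed in advance, no eigenfunctions need to be estimated, which is the paper's point: the test statistic involves no basis-estimation error.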

Effects of thickness and background on the masking ability of high-translucent zirconias (고투명도 지르코니아의 두께 및 하부 배경에 따른 색조 차단 효과)

  • Kim, Young-Gon;Jung, Ji-Hye;Kong, Hyun-Jun;Kim, Yu-Lee
    • Journal of Dental Rehabilitation and Applied Science
    • /
    • v.37 no.4
    • /
    • pp.199-208
    • /
    • 2021
  • Purpose: The purpose of this study was to compare and evaluate the masking ability of three types of high-translucency zirconia at various thicknesses and over various backgrounds. Materials and Methods: Using three types of high-translucency zirconia (Ceramill zolid fx white, Ceramill zolid ht+ white, Ceramill zolid ht+ preshade A2), 10 cylindrical specimens were fabricated by the CAD/CAM method at a diameter of 10 mm for each of four thicknesses (0.6 mm, 1.0 mm, 1.5 mm, 2.0 mm). Each background was 10 mm in diameter and 10 mm thick. A1, A2, and A3 flowable resin backgrounds, a blue core-resin background, and a Ni-Cr alloy background were prepared, and the black and white backgrounds provided by the spectrophotometer manufacturer (x-rite, Koblach, Austria) were also used. Zirconia specimens were stacked on each background specimen to measure L*, a*, and b* with a spectrophotometer (Color i5, x-rite, Koblach, Austria), and the ΔE value relative to each other background was calculated. The calculated mean ΔE values were compared against a perceptibility threshold of 1.0 and an acceptability threshold of 3.7. Nonparametric tests, including the Kruskal-Wallis test, were performed to verify statistical significance (α = 0.05). Results: There were significant differences in mean ΔE according to zirconia type, background, and thickness (P = 0.000). Conclusion: According to the results of this study, the pre-colored high-translucency zirconia can achieve the desired shade when restored over teeth, composite resins, and abutments, except over a blue resin core.
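The ΔE values compared against the 1.0 and 3.7 thresholds are CIE76 distances in L*a*b* space: ΔE = √(ΔL*² + Δa*² + Δb*²). A minimal computation with hypothetical specimen readings (the study's measured values are not given in the abstract):

```python
import math

def delta_e(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

# Hypothetical L*, a*, b* readings of one zirconia specimen measured
# over two different backgrounds.
over_white = (72.4, 1.8, 14.2)
over_black = (70.1, 2.0, 12.9)
de = delta_e(over_white, over_black)

PERCEPTIBILITY = 1.0   # thresholds used in the study
ACCEPTABILITY = 3.7
masks_acceptably = de <= ACCEPTABILITY
```

In this hypothetical case ΔE ≈ 2.65: the background shows through perceptibly (above 1.0) but within the clinically acceptable limit (below 3.7).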

Using noise filtering and sufficient dimension reduction method on unstructured economic data (노이즈 필터링과 충분차원축소를 이용한 비정형 경제 데이터 활용에 대한 연구)

  • Jae Keun Yoo;Yujin Park;Beomseok Seo
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.2
    • /
    • pp.119-138
    • /
    • 2024
  • Text indicators are increasingly valuable in economic forecasting, but are often hindered by noise and high dimensionality. This study aims to explore post-processing techniques, specifically noise filtering and dimensionality reduction, to normalize text indicators and enhance their utility through empirical analysis. Predictive target variables for the empirical analysis include monthly leading index cyclical variations, BSI (business survey index) All industry sales performance, BSI All industry sales outlook, as well as quarterly real GDP SA (seasonally adjusted) growth rate and real GDP YoY (year-on-year) growth rate. This study explores the Hodrick and Prescott filter, which is widely used in econometrics for noise filtering, and employs sufficient dimension reduction, a nonparametric dimensionality reduction methodology, in conjunction with unstructured text data. The analysis results reveal that noise filtering of text indicators significantly improves predictive accuracy for both monthly and quarterly variables, particularly when the dataset is large. Moreover, this study demonstrated that applying dimensionality reduction further enhances predictive performance. These findings imply that post-processing techniques, such as noise filtering and dimensionality reduction, are crucial for enhancing the utility of text indicators and can contribute to improving the accuracy of economic forecasts.
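The Hodrick-Prescott filter used above for noise filtering decomposes a series into trend and cycle by solving a penalized least-squares problem. A minimal direct implementation (the monthly text indicator is simulated for illustration; the conventional smoothing parameter is 14,400 for monthly and 1,600 for quarterly series):

```python
import numpy as np

def hp_filter(y, lam=14400.0):
    """Hodrick-Prescott filter: the trend tau solves
    (I + lam * D'D) tau = y, where D is the second-difference operator.
    lam = 14400 is the conventional choice for monthly data."""
    n = y.size
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i], D[i, i + 1], D[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    cycle = y - trend          # the "noise" removed from the indicator
    return trend, cycle

# Hypothetical monthly text indicator: slow trend plus sharp noise.
rng = np.random.default_rng(4)
t = np.arange(120)
raw = 0.02 * t + np.sin(2 * np.pi * t / 60) + rng.normal(0, 0.3, t.size)
trend, cycle = hp_filter(raw)
```

The smoothed trend is what would then be fed, together with sufficient dimension reduction, into the forecasting models; production code would typically call an optimized implementation such as statsmodels' `hpfilter` rather than the dense solve above.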

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, the period when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolves the data-imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning used only corporate information that is also available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. The model can therefore provide stable default-risk assessment to companies, such as small and medium-sized enterprises and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has been studied actively in recent years, model-bias issues remain because most studies make predictions with a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default-risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation method.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for evaluation methods to be prepared, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by using stacking ensemble techniques that combine various machine learning models. This captures the complex nonlinear relationships between default risk and corporate information while keeping the advantage of machine learning-based default risk prediction models, namely short calculation time. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets to generate forecasts. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank sum test was used to check whether the two sets of forecasts in each pair differed significantly. The analysis showed that the stacking ensemble model's forecasts differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based bankruptcy risk prediction, given that traditional credit rating models can also serve as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will serve as a resource for increasing practical use by overcoming and improving the limitations of existing machine learning-based models.
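The two-stage procedure (seven-fold out-of-fold sub-model forecasts, then a meta-learner) can be sketched as follows. The sub-models here (closed-form ridge and k-nearest-neighbour regression) and the simulated data are illustrative stand-ins for the paper's Random Forest/MLP/CNN sub-models and corporate data:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical data: 10 financial features -> a continuous default-risk
# target (standing in for the Merton-model risk measure).
n, p = 700, 10
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true + 0.5 * np.sin(3 * X[:, 0]) + rng.normal(0, 0.3, n)

def ridge_fit(Xtr, ytr, lam=1.0):
    """Closed-form ridge regression coefficients."""
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]),
                           Xtr.T @ ytr)

def knn_predict(Xtr, ytr, Xte, k=15):
    """k-nearest-neighbour regression (captures the nonlinearity)."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d, axis=1)[:, :k]
    return ytr[idx].mean(axis=1)

# Stage 1: out-of-fold forecasts from each sub-model over 7 folds,
# mirroring the paper's split of the training data into seven pieces.
folds = np.array_split(np.arange(n), 7)
meta_X = np.zeros((n, 2))
for te in folds:
    tr = np.setdiff1d(np.arange(n), te)
    meta_X[te, 0] = X[te] @ ridge_fit(X[tr], y[tr])
    meta_X[te, 1] = knn_predict(X[tr], y[tr], X[te])

# Stage 2: a linear meta-learner combines the sub-model forecasts.
A = np.column_stack([meta_X, np.ones(n)])
meta_w, *_ = np.linalg.lstsq(A, y, rcond=None)
stacked = A @ meta_w

mse_ridge = np.mean((meta_X[:, 0] - y) ** 2)   # raw ridge sub-model
mse_stack = np.mean((stacked - y) ** 2)        # stacked combination
```

Because the raw ridge forecast lies in the span the meta-learner optimizes over, the in-sample stacked fit is never worse than any single sub-model; held-out gains, as in the paper, come from the sub-models' complementary errors.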