• Title/Summary/Keyword: Gaussian kernel


A Voxel-Based Morphometry of Gray Matter Reduction in Patients with Dementia of the Alzheimer's Type (화소 기반 형태분석 방법을 이용한 알츠하이머 치매환자의 회백질 용적감소의 정량적 분석)

  • Lim, Hyun-Kook;Choi, Eun-Hyung;Lee, Chang-Uk
    • Korean Journal of Biological Psychiatry
    • /
    • v.15 no.2
    • /
    • pp.118-125
    • /
    • 2008
  • Objectives : The purpose of this study was to identify brain regions with reduced gray matter volume and to demonstrate the capability of voxel-based morphometry (VBM) analysis for lateralizing clinically significant brain regions in patients with dementia of the Alzheimer's type compared to a healthy control group. Methods : MR T1-weighted images of 20 dementia of the Alzheimer's type patients were compared with those of 20 normal controls. Images were transformed to standard MNI space. To observe gray matter volume changes, gray matter was smoothed with a Gaussian kernel. After this preprocessing, statistical analysis was performed using statistical parametric mapping software (SPM2). Results : Gray matter volume was significantly reduced in the bilateral parahippocampal gyri, Lt. anterior cingulate gyrus, Lt. posterior cingulate gyrus, bilateral superior temporal gyri, Lt. middle temporal gyrus, Lt. superior, bilateral middle, Rt. anterior frontal gyri and Rt. precuneus in the dementia of the Alzheimer's type patient group. Conclusions : These VBM results confirm previous findings of temporal lobe and limbic lobe atrophic changes in dementia of the Alzheimer's type, and suggest that these abnormalities may be confined to specific sites within those lobes, rather than showing a widespread distribution.
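The Gaussian smoothing step in a VBM pipeline can be sketched with SciPy. The synthetic volume, 1 mm voxel size, and 8 mm FWHM below are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Toy 3D "gray matter" volume standing in for a segmented MR image.
rng = np.random.default_rng(0)
volume = rng.random((32, 32, 32))

# VBM pipelines usually state smoothing as a FWHM; convert FWHM to the
# standard deviation that gaussian_filter expects.
fwhm_voxels = 8.0  # hypothetical: 8 mm FWHM at 1 mm isotropic voxels
sigma = fwhm_voxels / (2.0 * np.sqrt(2.0 * np.log(2.0)))

smoothed = gaussian_filter(volume, sigma=sigma)

# Smoothing preserves the overall mean but shrinks the voxelwise variance.
print(round(float(volume.mean()), 3), round(float(smoothed.mean()), 3))
```

Smoothing makes the data more normally distributed and compensates for imperfect spatial normalization before voxelwise statistics are run.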


Multi-thresholds Selection Based on Plane Curves (평면 곡선에 기반한 다중 임계값 결정)

  • Duan, Na;Seo, Suk-T.;Park, Hye-G.;Kwon, Soon-H.
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.20 no.2
    • /
    • pp.279-284
    • /
    • 2010
  • The plane curve approach proposed by Boukharouba et al. is a multi-threshold selection method that searches for peaks and valleys based on the histogram cumulative distribution function. However, the method requires selecting parameters to compose the plane curve, and the shape of the plane curve varies with those parameters; consequently, peak-valley detection is also affected by them. In this paper, we propose an entropy-maximization-based method to select optimal plane curve parameters, and a multi-thresholding method based on the selected parameters. The effectiveness of the proposed method is demonstrated by multi-thresholding experiments on various images and by comparison with other conventional histogram-based thresholding methods.
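The paper's plane-curve construction needs its own parameterization, but the underlying idea of entropy-maximizing threshold selection on a histogram can be illustrated with a simpler, single-threshold Kapur-style criterion (all data synthetic; this is not the authors' method):

```python
import numpy as np

def kapur_threshold(values, bins=256):
    """Pick the threshold maximizing the summed entropy of the
    below-threshold and above-threshold histogram classes."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h = (-(q0[q0 > 0] * np.log(q0[q0 > 0])).sum()
             - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum())
        if h > best_h:
            best_h, best_t = h, t
    return edges[best_t]

# Bimodal toy "image": two intensity clusters around 0.2 and 0.8.
rng = np.random.default_rng(1)
image = np.concatenate([rng.normal(0.2, 0.05, 500),
                        rng.normal(0.8, 0.05, 500)]).clip(0, 1)
t = kapur_threshold(image)
print(round(float(t), 2))  # a cut between the two modes
```

Multi-thresholding generalizes this by searching over several cut points at once, which is where the plane-curve peak-valley analysis comes in.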

Optimization of Material Properties for Coherent Behavior across Multi-resolution Cloth Models

  • Sung, Nak-Jun;Transue, Shane;Kim, Minsang;Choi, Yoo-Joo;Choi, Min-Hyung;Hong, Min
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.8
    • /
    • pp.4072-4089
    • /
    • 2018
  • This paper introduces a scheme for optimizing the material properties of mass-spring systems of different resolutions to provide coherent behavior for reduced level-of-detail MSS (Mass-Spring System) meshes. Globally optimal material coefficients are derived to match the behavior of a provided reference mesh. The proposed method also gives insight into the levels of reduction that can be achieved while maintaining behavioral coherency among MSS meshes of different resolutions. We obtain visually acceptable coherent behaviors for cloth models based on our proposed error metric and show that this method can significantly reduce the resolution levels of simulated objects. In addition, we have confirmed coherent behaviors at different resolutions through various experimental validation tests. We analyzed spring force estimates through triangular barycentric coordinates based on the reference MSS, which uses a Gaussian kernel based distribution. Experimental results show that the displacement difference ratio of the node positions is less than 10% even when the number of nodes of $MSS^{sim}$ decreases by more than 50% compared with $MSS^{ref}$. Therefore, we believe that it can be applied to various fields that require real-time simulation technology, such as VR, AR, surgical simulation, mobile games, and numerous other application domains.
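A Gaussian-kernel distribution of this kind can be read as distance-based weighting: each coarse-mesh node blends quantities from nearby reference-mesh nodes, with closer nodes weighted more heavily. The distances, bandwidth `h`, and force values below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def gaussian_weights(dists, h):
    """Normalized Gaussian-kernel weights: nearby reference nodes
    contribute more to an interpolated quantity than distant ones."""
    w = np.exp(-(dists ** 2) / (2.0 * h ** 2))
    return w / w.sum()

# Hypothetical distances from one coarse-mesh node to 4 reference nodes.
dists = np.array([0.1, 0.4, 0.7, 1.2])
w = gaussian_weights(dists, h=0.5)
print(np.round(w, 3))  # weights sum to 1 and decrease with distance

# Distance-weighted blend of (hypothetical) 2D spring forces.
forces = np.array([[0.0, -9.8], [0.1, -9.5], [0.2, -9.0], [0.3, -8.0]])
blended = w @ forces
print(np.round(blended, 2))
```

The normalization keeps the blended estimate in the same range as the reference values regardless of how many neighbors contribute.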

Self-diagnostic system for smartphone addiction using multiclass SVM (다중 클래스 SVM을 이용한 스마트폰 중독 자가진단 시스템)

  • Pi, Su Young
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.1
    • /
    • pp.13-22
    • /
    • 2013
  • Smartphone addiction has become more serious than internet addiction, since people can download and run numerous applications on smartphones even without an internet connection. However, smartphone addiction is not sufficiently addressed in current studies. The S-scale method developed by the Korea National Information Society Agency involves so many questions that respondents are likely to avoid the diagnosis itself. Moreover, since the S-scale is determined by the total score of the responded items without taking demographic variables into account, it is difficult to get an accurate result. Therefore, in this paper, we have extracted from all data the important factors that affect smartphone addiction, including demographic variables. Then we classified the selected items with a neural network. A comparative analysis of the backpropagation learning algorithm and a multiclass support vector machine shows that the learning rate is slightly higher for the multiclass SVM. Since the multiclass SVM suggested in this paper adapts well to rapid changes in data, we expect that it will lead to a more accurate self-diagnosis of smartphone addiction.
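A multiclass SVM classifier of the kind compared here can be sketched with scikit-learn. The three classes and two features below are synthetic stand-ins (e.g. for general-user / potential-risk / high-risk groups), not the paper's data:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic 3-class data: three well-separated 2D clusters.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(c, 0.3, size=(30, 2)) for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 30)

# SVC handles the multiclass case via one-vs-one voting internally.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# One test point near each cluster center.
print(clf.predict([[0.1, 0.0], [1.4, 1.6], [3.1, 2.9]]))
```

The RBF kernel lets the same classifier handle the nonlinear class boundaries that a total-score cutoff like the raw S-scale cannot express.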

Study on the K-scale reflecting the confidence of survey responses (설문 응답에 대한 신뢰도를 반영한 K-척도에 관한 연구)

  • Park, Hye Jung;Pi, Su Young
    • Journal of the Korean Data and Information Science Society
    • /
    • v.24 no.1
    • /
    • pp.41-51
    • /
    • 2013
  • In the information age, internet addiction has become a big issue in modern society, and its adverse effects have been increasing at an exponential rate. Along with the supply of a great variety of internet-connected devices, K-scale diagnostic criteria have been used for internet addiction self-diagnosis tests covering high-speed wireless internet service, netbooks, smartphones, etc. The K-scale diagnostic criteria needed to change with the times, and they were revised in March 2012. In this paper, we analyze internet addiction and K-scale features for the actual conditions in Gyeongbuk collegiate areas using the K-scale diagnostic criteria revised in 2012. Diagnosis of internet addiction is measured by the respondents' subjective self-report, so respondents may make willful errors to hide the truth. In this paper, we add reliability values reflecting the confidence of survey responses to reduce response errors on the K-scale and enhance the reliability of the analysis.
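One simple way to fold response confidence into a scale score is to weight each item by a per-item reliability value before totalling, so low-confidence answers contribute less. This is a hypothetical sketch of that idea; the item scores, confidence values, and rescaling are all invented, not the paper's formula:

```python
import numpy as np

# Hypothetical K-scale item responses and per-item confidence in [0, 1].
scores = np.array([4, 3, 4, 2, 4, 3])
confidence = np.array([1.0, 0.9, 0.4, 1.0, 0.5, 0.8])

raw_total = int(scores.sum())

# Confidence-weighted mean item score, rescaled to the raw-total range
# so it stays comparable with the unweighted K-scale cutoffs.
weighted_total = float((scores * confidence).sum()
                       / confidence.sum() * len(scores))

print(raw_total, round(weighted_total, 1))
```

Here the two low-confidence, high-score items pull the weighted total below the raw total, illustrating how suspect responses are discounted.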

Super Resolution using Dictionary Data Mapping Method based on Loss Area Analysis (손실 영역 분석 기반의 학습데이터 매핑 기법을 이용한 초해상도 연구)

  • Han, Hyun-Ho;Lee, Sang-Hun
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.3
    • /
    • pp.19-26
    • /
    • 2020
  • In this paper, we propose a method to analyze the loss regions of dictionary-based super resolution results learned for image quality improvement, and to map the learning data according to the analyzed loss regions. In the conventional learned-dictionary-based method, a result that differs from the feature configuration of the input image may be generated depending on the learning images, and unintended artifacts may occur. The proposed method estimates the loss information of low resolution images by analyzing the reconstructed contents, to reduce inconsistent feature composition and unintended artifacts in the example-based super resolution process. The noise and pixel imbalance of the estimated loss information are improved using a Gaussian-based kernel, and the training data are mapped according to the resulting final interpolation feature map, generating super resolution output with less noise, fewer artifacts, and reduced staircasing compared to existing super resolution. For evaluation, the results of existing super resolution generation algorithms and the proposed method were compared with high-definition images; the proposed method was 4% better in PSNR (Peak Signal to Noise Ratio) and 3% better in SSIM (Structural SIMilarity index).
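The PSNR metric used for evaluation has a standard closed form; a minimal NumPy sketch on synthetic 8-bit images (the images themselves are invented):

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((reference.astype(float)
                   - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy "high-definition" reference vs. a mildly noisy reconstruction.
rng = np.random.default_rng(3)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
noisy = np.clip(ref.astype(int)
                + rng.integers(-5, 6, size=ref.shape), 0, 255)

value = psnr(ref, noisy)
print(round(value, 1))  # higher dB means closer to the reference
```

A 4% PSNR gain, as reported, is measured on this dB scale, so small absolute differences can still be perceptually meaningful.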

A Method for Tree Image Segmentation Combined Adaptive Mean Shifting with Image Abstraction

  • Yang, Ting-ting;Zhou, Su-yin;Xu, Ai-jun;Yin, Jian-xin
    • Journal of Information Processing Systems
    • /
    • v.16 no.6
    • /
    • pp.1424-1436
    • /
    • 2020
  • Although huge progress has been made in current image segmentation work, there are still no efficient segmentation strategies for tree images taken in natural environments with complex backgrounds. To address these problems, we propose a method for tree image segmentation that combines adaptive mean shifting with image abstraction. Our approach performs better than others because it focuses mainly on the background of the image and the characteristics of the tree itself. First, we abstract the original tree image using bilateral filtering and an image pyramid from multiple perspectives, which reduces the influence of the background and tree canopy gaps on clustering. Spatial location and gray scale features are obtained by step detection and the insertion rule method, respectively. Bandwidths calculated from the spatial location and gray scale features are then used to determine the size of the Gaussian kernel function in mean shift clustering. Furthermore, the flood fill method is employed to fill the results of clustering and highlight the region of interest. To prove the effectiveness of tree image abstraction on image clustering, we compared different abstraction levels and achieved optimal clustering results. For our algorithm, the average segmentation accuracy (SA), over-segmentation rate (OR), and under-segmentation rate (UR) of the crown are 91.21%, 3.54%, and 9.85%, respectively; the average values for the trunk are 92.78%, 8.16%, and 7.93%. Compared experimentally with other popular tree image segmentation methods, our method requires no human interaction and shows higher SA. This work also shows a promising application prospect for visual reconstruction and factor measurement of trees.
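The core of mean shift with a Gaussian kernel is easy to show in one dimension: each point is repeatedly moved to the kernel-weighted mean of all points until it settles on a density mode. The gray-level clusters and bandwidth below are synthetic assumptions, not the paper's adaptive bandwidths:

```python
import numpy as np

def mean_shift_1d(points, bandwidth, iters=50):
    """Shift every point toward the Gaussian-weighted mean of the data
    until it converges to a density mode."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        d = modes[:, None] - points[None, :]
        w = np.exp(-(d ** 2) / (2.0 * bandwidth ** 2))
        modes = (w * points[None, :]).sum(axis=1) / w.sum(axis=1)
    return modes

# Two gray-level clusters standing in for "trunk" and "background" pixels.
rng = np.random.default_rng(4)
pixels = np.concatenate([rng.normal(50, 5, 100), rng.normal(200, 5, 100)])
modes = mean_shift_1d(pixels, bandwidth=15.0)

# Every point ends up near one of the two density modes.
print(round(float(modes[:100].mean())), round(float(modes[100:].mean())))
```

The adaptive part of the paper's method amounts to choosing `bandwidth` per feature from the data rather than fixing it by hand.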

236U accelerator mass spectrometry with a time-of-flight and energy detection system

  • Li Zheng;Hiroyuki Matsuzaki;Takeyasu Yamagata
    • Nuclear Engineering and Technology
    • /
    • v.54 no.12
    • /
    • pp.4636-4643
    • /
    • 2022
  • A time-of-flight and energy (TOF-E) detection system for 236U accelerator mass spectrometry (AMS) measurements has been developed to improve the 236U/238U sensitivity at the Micro Analysis Laboratory, Tandem accelerator (MALT), The University of Tokyo. By observing the TOF distributions of 235U, 236U, and 238U, this TOF-E detection system clearly separated 236U from the interference of 235U and 238U when measuring three kinds of uranium standards. In addition, we have developed a novel method combining kernel-based density estimation with multi-Gaussian fitting to estimate the 236U/238U sensitivity of the TOF-E detection system. Using this new estimation method, a 236U/238U sensitivity of 3.4 × 10⁻¹² and a time resolution of 1.9 ns are obtained. The 236U/238U sensitivity of the TOF-E detection system is two orders of magnitude better than that of the previous gas ionization chamber. Moreover, species other than uranium isotopes were also observed in the measurement of a surface soil sample, demonstrating that the TOF-E detection system has higher sensitivity in particle identification. With its high sensitivity in mass determination, this TOF-E detection system could also be used for other heavy isotope AMS.
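The kernel-density-plus-multi-Gaussian idea can be sketched generically: first smooth the raw event distribution with a Gaussian KDE, then fit a sum of Gaussians to separate overlapping peaks. The two-peak TOF spectrum below is entirely synthetic and stands in only loosely for a major/minor isotope pair:

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import curve_fit

# Synthetic TOF spectrum: a large peak and a small nearby peak.
rng = np.random.default_rng(5)
tof = np.concatenate([rng.normal(100.0, 2.0, 5000),
                      rng.normal(110.0, 2.0, 200)])

# Step 1: kernel density estimate smooths the raw event distribution.
kde = gaussian_kde(tof)
grid = np.linspace(90, 120, 300)
density = kde(grid)

# Step 2: fit a two-Gaussian model to the smoothed density.
def two_gauss(x, a1, m1, s1, a2, m2, s2):
    return (a1 * np.exp(-(x - m1) ** 2 / (2 * s1 ** 2))
            + a2 * np.exp(-(x - m2) ** 2 / (2 * s2 ** 2)))

p0 = [0.2, 100, 2, 0.01, 110, 2]  # rough initial guesses
params, _ = curve_fit(two_gauss, grid, density, p0=p0)

print(round(params[1], 1), round(params[4], 1))  # fitted peak centers
```

Once the peaks are resolved, the area ratio of the fitted Gaussians gives an estimate of the minor/major abundance ratio, which is the quantity of interest for sensitivity.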

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as company stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ on the Korea Exchange. In addition, this paper uses multi-class SVM for the prediction of DEA-based efficiency ratings for venture businesses, derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas for classifying which companies are more efficient venture companies: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies; DEA has also been applied to corporate credit rating. In this study we utilized DEA for sorting venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical theory. Thus far, the method has shown good performance, especially in its generalizing capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, i.e., the maximum separation between classes; the support vectors are the points closest to the maximum margin hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the original input space is mapped into a high-dimensional dot-product feature space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM to develop a data-mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one approach among binary classification methods, together with the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data-mining-based multi-class prediction model. Among the three multi-classification approaches, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class to within one class of error when it is difficult to determine the accurate class in the actual market. So we also present accuracy results within 1-class errors, where the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than the binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
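The Gaussian RBF kernel used here has an explicit closed form, K(x, y) = exp(-γ‖x − y‖²), which acts as the dot product in the implicit high-dimensional feature space. A minimal NumPy sketch (the points and γ are arbitrary illustrations):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq)

X = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
K = rbf_kernel(X, X, gamma=0.5)

# Diagonal entries are 1 (each point matched with itself); off-diagonal
# entries shrink toward 0 as points move apart.
print(np.round(K, 3))
```

Because K depends only on pairwise distances, the SVM never has to construct the high-dimensional feature vectors explicitly (the "kernel trick").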

Comparison of Survival Prediction of Rats with Hemorrhagic Shocks Using Artificial Neural Network and Support Vector Machine (출혈성 쇼크를 일으킨 흰쥐에서 인공신경망과 지원벡터기계를 이용한 생존율 비교)

  • Jang, Kyung-Hwan;Yoo, Tae-Keun;Nam, Ki-Chang;Choi, Jae-Rim;Kwon, Min-Kyung;Kim, Deok-Won
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.48 no.2
    • /
    • pp.47-55
    • /
    • 2011
  • Hemorrhagic shock is the cause of one third of deaths resulting from injury worldwide. Early diagnosis of hemorrhagic shock makes it possible for physicians to treat it successfully. The objective of this paper was to select an optimal classifier model using physiological signals measured from rats during a hemorrhage experiment. This data set was used to train models and predict survival using an artificial neural network (ANN) and a support vector machine (SVM). To avoid over-fitting, we chose the best classifier according to performance measured by 10-fold cross validation. As a result, we selected an ANN with one hidden layer of three hidden nodes and an SVM with a Gaussian kernel function as the trained prediction models; the ANN showed 88.9% sensitivity, 96.7% specificity, and 92.0% accuracy, while the SVM provided 97.8% sensitivity, 95.0% specificity, and 96.7% accuracy. Therefore, the SVM was better than the ANN for survival prediction.
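The sensitivity, specificity, and accuracy figures reported above come from standard confusion-matrix formulas, sketched here on invented binary predictions (1 = survived, 0 = did not survive; the labels are not the paper's data):

```python
import numpy as np

def sens_spec_acc(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    accuracy = (TP+TN)/N for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = ((y_true == 1) & (y_pred == 1)).sum()
    tn = ((y_true == 0) & (y_pred == 0)).sum()
    fp = ((y_true == 0) & (y_pred == 1)).sum()
    fn = ((y_true == 1) & (y_pred == 0)).sum()
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(y_true)

# Invented example: 10 positives (one missed), 10 negatives (one missed).
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 9 + [0] + [0] * 9 + [1]

sens, spec, acc = sens_spec_acc(y_true, y_pred)
print(sens, spec, acc)  # → 0.9 0.9 0.9
```

In a 10-fold cross-validation setup, these metrics would be computed on each held-out fold and averaged to compare the ANN and SVM fairly.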