• Title/Summary/Keyword: statistical parameter extraction

Synthesis and characterization of poly(vinyl-alcohol)-poly(β-cyclodextrin) copolymer membranes for aniline extraction

  • Oughlis-Hammache, F.;Skiba, M.;Hallouard, F.;Moulahcene, L.;Kebiche-Senhadji, O.;Benamor, M.;Lahiani-Skiba, M.
    • Membrane and Water Treatment
    • /
    • v.7 no.3
    • /
    • pp.223-240
    • /
    • 2016
  • In this study, poly(vinyl alcohol) and a water-insoluble β-cyclodextrin polymer (β-CDP) cross-linked with citric acid were used as the macrocyclic carrier in the preparation of polymer inclusion membranes (PIMs) for extracting aniline (as a model molecule) from aqueous media. The membranes were first characterized by X-ray diffraction, Fourier-transform infrared spectroscopy, and a water-swelling test. Aniline transport was studied in a two-compartment transport cell under various experimental conditions, such as carrier content in the membrane, stirring rate, and initial aniline concentration. A kinetic study was performed, and the kinetic parameters rate constant (k), permeability coefficient (P), and flux (J) were calculated. These first results demonstrate the utility of such polymeric membranes for environmental decontamination of toxic organic molecules like aniline. Predictive modeling of the transport flux through these materials was then studied using design of experiments; the chosen design was a two-level full factorial design 2^k. An empirical correlation between the aniline transport flux and the independent variables (poly-β-CD membrane content, agitation speed, and initial aniline concentration) was successfully obtained. Statistical analysis showed that the initial aniline concentration of the solution was the most important parameter in the study domain. The model revealed a strong interaction between the poly-β-CD membrane content and the stirring speed of the source solution. The good agreement between the model and the experimental transport data confirms the model's validity.
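
The factorial-design step above can be sketched as follows. This is a hypothetical illustration: the coded (-1/+1) design is a standard 2^3 full factorial, but the flux responses are invented so that the concentration factor dominates and the content-stirring interaction appears, mimicking the abstract's findings rather than reproducing the paper's data.

```python
import itertools
import numpy as np

# 2^3 full factorial in coded variables (-1/+1): 8 runs.
# x1: carrier content, x2: stirring rate, x3: initial aniline concentration.
design = np.array(list(itertools.product([-1, 1], repeat=3)))

# Invented flux responses generated from a known model (not the paper's data).
J = (10 + 1.0*design[:, 0] + 0.5*design[:, 1] + 4.0*design[:, 2]
        + 2.0*design[:, 0]*design[:, 1])

# Model matrix: intercept, main effects, two-factor interactions.
X = np.column_stack([
    np.ones(len(design)),
    design,                         # main effects x1, x2, x3
    design[:, 0]*design[:, 1],      # x1*x2 interaction
    design[:, 0]*design[:, 2],      # x1*x3 interaction
    design[:, 1]*design[:, 2],      # x2*x3 interaction
])
coef, *_ = np.linalg.lstsq(X, J, rcond=None)
# The largest main-effect coefficient (coef[3], for x3) flags the dominant
# factor, and coef[4] exposes the x1*x2 interaction.
```

Because the design is orthogonal, each coefficient is estimated independently, which is why a 2^k design cleanly separates main effects from interactions.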

The Endocardial Boundary Detection based on Statistical Characteristics of Echocardiographic Image (초음파 영상의 통계적 특성에 근거한 심내벽 윤곽선 검출)

  • Won, Chul-Ho;Kim, Myoung-Nam;Cho, Jin-Ho
    • Journal of Biomedical Engineering Research
    • /
    • v.17 no.3
    • /
    • pp.365-372
    • /
    • 1996
  • Research on acquiring diagnostic parameters from ultrasonic images has advanced with progress in digital image processing techniques. Detection of the endocardial boundary is especially important in ultrasonic images, because the endocardial boundary is used as a clinical parameter to estimate both the cardiac area and the variation of cardiac volume. Various methods to detect the cardiac boundary have been proposed, but they remain insufficient. In this paper, an algorithm is proposed that detects the endocardial boundary by expanding the cavity region from the center using statistical information. The mean and standard deviation in the cavity region are lower than those in the muscle region. Therefore, if the product of the mean and standard deviation is defined as a homogeneity coefficient, pixels with small variation of this coefficient can be taken as the cavity region, and extraction of the endocardial boundary from the cavity region becomes possible. The proposed method detected the endocardial boundary more effectively than edge-based or threshold-based methods, and is more robust to noise than the radial searching method, which depends heavily on the center position.
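
The homogeneity-coefficient idea (mean times standard deviation, low in the echo-poor cavity and high in the speckled muscle) can be sketched on a synthetic image. The image statistics and the left/right split below are invented for demonstration only.

```python
import numpy as np

# Synthetic two-region "echocardiogram": a dark, quiet cavity on the left and
# a bright, speckled muscle region on the right.
rng = np.random.default_rng(0)
img = np.empty((32, 32))
img[:, :16] = rng.normal(20, 2, (32, 16))     # cavity: low mean, low spread
img[:, 16:] = rng.normal(120, 25, (32, 16))   # muscle: high mean, high spread

def homogeneity(patch):
    # The paper's coefficient: mean * standard deviation of the region.
    return patch.mean() * patch.std()

cavity = homogeneity(img[:, :16])
muscle = homogeneity(img[:, 16:])
# The cavity coefficient is far smaller, so a threshold on this coefficient
# (or on its local variation during region growing) separates the regions.
```

In the actual algorithm the coefficient would be evaluated over a growing neighborhood around the cavity center rather than over fixed halves.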

Automatic Extraction of Size for Low Contrast Defects of LCD Polarizing Film (Low Contrast 특성을 갖는 LCD 편광필름 결함의 크기 자동 검출)

  • Park, Duck-Chun;Joo, Hyo-Nam;Rew, Keun-Ho
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.14 no.5
    • /
    • pp.438-443
    • /
    • 2008
  • Segmenting and classifying low-contrast defects on flat-panel displays is one of the key problems for automatic inspection systems in practice. The problem becomes more complicated when the quality of the acquired image is degraded by illumination irregularity. Many algorithms have been developed and implemented successfully for defect segmentation; however, vision algorithms are inherently dependent on parameters that must be set manually. In this paper, a morphological segmentation algorithm is chosen, and a technique using frequency-domain analysis of the input images is developed for automatically selecting the morphological parameter. An extensive statistical performance analysis is performed to compare the developed algorithms.
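
One plausible reading of the frequency-domain parameter selection is sketched below: estimate the dominant spatial period of an image profile with an FFT and derive a morphological structuring-element size from it. The signal, the half-period rule, and the scale are all assumptions for illustration, not the paper's method.

```python
import numpy as np

# Synthetic 1-D intensity profile with a 32-pixel periodic background pattern
# plus mild noise, standing in for a row of the inspection image.
n = 256
x = np.arange(n)
profile = np.sin(2 * np.pi * x / 32) + 0.1 * np.random.default_rng(0).normal(size=n)

spectrum = np.abs(np.fft.rfft(profile))
spectrum[0] = 0                           # ignore the DC component
k = spectrum.argmax()                     # dominant frequency bin (cycles per n samples)
period = n / k                            # dominant spatial period in pixels
kernel_size = max(3, int(period / 2))     # hypothetical rule: half the period
```

The point is only that the structuring-element size is derived from measured image content instead of being set by hand.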

A New Approach to Fingerprint Detection Using a Combination of Minutiae Points and Invariant Moments Parameters

  • Basak, Sarnali;Islam, Md. Imdadul;Amin, M.R.
    • Journal of Information Processing Systems
    • /
    • v.8 no.3
    • /
    • pp.421-436
    • /
    • 2012
  • Different types of fingerprint detection algorithms that are based on extraction of minutiae points are prevalent in recent literature. In this paper, we propose a new algorithm to locate the virtual core point/centroid of an image. The Euclidean distance between the virtual core point and the minutiae points is taken as a random variable. The mean, variance, skewness, and kurtosis of the random variable are taken as the statistical parameters of the image to observe the similarities or dissimilarities among fingerprints from the same or different persons. Finally, we verified our observations with a moment parameter-based analysis of some previous works.
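
The statistical-parameter step described above can be sketched directly: take the Euclidean distances from a virtual core point (here simply the centroid, one possible choice) to the minutiae, then compute the four moments. The minutiae coordinates are invented for illustration.

```python
import numpy as np

# Invented minutiae coordinates (row, col) for a single fingerprint image.
minutiae = np.array([[10, 12], [40, 8], [25, 30], [5, 27], [33, 40]], float)

core = minutiae.mean(axis=0)                  # virtual core point (centroid)
d = np.linalg.norm(minutiae - core, axis=1)   # distances: the random variable

mean = d.mean()
var = d.var()
skew = ((d - mean)**3).mean() / var**1.5      # third standardized moment
kurt = ((d - mean)**4).mean() / var**2        # fourth standardized moment
features = (mean, var, skew, kurt)            # per-image statistical parameters
```

Two prints of the same finger should yield similar feature tuples, while different fingers should not, which is the basis for the similarity comparison in the paper.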

The implementation of children's automated formant setting by Praat scripting (Praat을 이용한 아동 포먼트 자동 세팅 스크립트 구현)

  • Park, Jiyeon;Seong, Cheoljae
    • Phonetics and Speech Sciences
    • /
    • v.10 no.4
    • /
    • pp.1-10
    • /
    • 2018
  • This study introduces an automated Praat script that allows optimal formant analysis for children's vowels. Using Burg's algorithm in Praat, formants can be extracted by setting the maximum formant value and the number of formants. The optimal formant setting was determined by identifying the two conditions, F1 and F2, with minimum standard deviations. When the optimal formant setting determined by the script was applied, normality tests were not significant for all vowels except /e/ for the maximum formant value, and for the vowels /a/, /e/, /i/, /o/, /u/, and /ʌ/ for the number of formants. This indicates that, when analyzing the formants of children's vowel sounds, uniformly applying one parameter setting (maximum formant value and number of formants) to all vowels is problematic. The performance of the optimal formant setting script was evaluated along with three different algorithms to determine whether it properly extracts formants for children's vowels. To this end, Korean monophthongs of 6-year-old children were collected and the Praat scripts were applied to the data. The resulting formant plots and statistical analysis showed that optimum_script and qtone_script, which is linked to the perceptual unit, performed very well in formant extraction compared with the remaining two scripts.
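
The selection rule above (sweep analysis settings, keep the one with the smallest F1/F2 standard deviations) can be sketched conceptually in plain Python rather than as a Praat script. `measure_formants` is a stand-in for Praat's Burg analysis; its noise model, the setting grid, and the "ideal" setting are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_formants(max_formant_hz, n_formants):
    # Hypothetical stand-in for Burg analysis: estimates get noisier the
    # farther the setting is from an assumed ideal of (7000 Hz, 5 formants).
    penalty = abs(max_formant_hz - 7000) / 1000 + abs(n_formants - 5)
    f1 = rng.normal(850, 10 + 20*penalty, 30)    # child-like F1 samples (Hz)
    f2 = rng.normal(1400, 15 + 30*penalty, 30)   # child-like F2 samples (Hz)
    return f1, f2

# Grid of candidate (maximum formant, number of formants) settings.
settings = [(mf, nf) for mf in (5500, 6500, 7000, 8000) for nf in (4, 5, 6)]

# Keep the setting whose repeated measurements scatter least.
best = min(settings, key=lambda s: sum(f.std() for f in measure_formants(*s)))
```

In Praat itself the same loop would re-run `To Formant (burg)...` per setting and read the standard deviations from the resulting Formant objects.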

A Study on Correlation among Viewers by Medium based on KBS PIE-TV Index

  • Lee, Jong-Soo;Hamacher, Alaric;Kwon, Soonchul;Lee, Seunghyun
    • International journal of advanced smart convergence
    • /
    • v.6 no.4
    • /
    • pp.9-18
    • /
    • 2017
  • In order to respond to the ever-changing media environment in the era of smart and mobile technology, KBS has introduced and partially applied PIE-TV and PIE-nonTV modes, which monitor the average number of viewers in the national population group via the sample-household extraction method, a traditional approach to ratings investigation. This study analyzes the correlation between the number of viewers of premiere, re-aired, and MPP channel programs and the number of OTT-based VOD viewers, using data extracted from PIE-TV survey results. KBS conducted a survey for three months between June and August 2017 to measure the PIE-TV Index, on the basis of which the above correlation was analyzed for programs classified as entertainment, drama, and cultural. For data analysis, SPSS (Ver. 18.0 for Windows, SPSS Inc., Chicago, IL, USA) was used; statistical significance was assumed at p<0.05 with a 95% confidence interval. Among the 30 subjects in the simple correlation analysis, parametric cases were assessed with the Pearson correlation coefficient and non-parametric cases with the Spearman correlation coefficient. The results are as follows: (1) the larger the number of viewers of premiere entertainment, drama, and cultural programs, the larger the number of VOD viewers; (2) for entertainment and drama programs, the larger the number of re-air viewers, the smaller the number of VOD viewers; (3) for entertainment and drama programs, the larger the number of MPP viewers, the smaller the number of VOD viewers. These statistics are expected to be useful for strategic planning of MPP channel lineups, including terrestrial TV broadcasting, cable TV, etc.
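
The two correlation measures used above can be sketched from first principles: Pearson's r for parametric pairs and Spearman's rho (Pearson on ranks) for non-parametric pairs. The viewer counts below are invented; they are merely constructed to be positively related, like finding (1).

```python
import numpy as np

# Invented per-program viewer counts (not the KBS survey data).
premiere = np.array([120, 95, 180, 60, 210, 150, 90, 130], float)
vod      = np.array([300, 240, 420, 160, 500, 380, 220, 330], float)

def pearson(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the ranks (no ties here).
    rank = lambda a: a.argsort().argsort().astype(float)
    return pearson(rank(x), rank(y))

r = pearson(premiere, vod)     # parametric case
rho = spearman(premiere, vod)  # non-parametric case
```

In practice the choice between the two would follow a normality test on each pair, as the study describes.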

Application of peak based-Bayesian statistical method for isotope identification and categorization of depleted, natural and low enriched uranium measured by LaBr3:Ce scintillation detector

  • Haluk Yucel;Selin Saatci Tuzuner;Charles Massey
    • Nuclear Engineering and Technology
    • /
    • v.55 no.10
    • /
    • pp.3913-3923
    • /
    • 2023
  • Today, medium-energy-resolution detectors are preferred in radioisotope identification devices (RIDs) for nuclear and radioactive material categorization. However, there is still a need to develop or enhance "automated identifiers" for useful RID algorithms. To decide whether a material is SNM or NORM, a key parameter is the energy resolution of the detector. Although masking, shielding, gain shift/stabilization, and other on-site parameters are also important for successful operation, the suitability of the RID algorithm is likewise critical to identification reliability when extracting features from the spectral analysis. In this study, a RID algorithm based on a Bayesian statistical method has been modified for medium-energy-resolution detectors and applied to uranium gamma-ray spectra taken by a LaBr3:Ce detector. The present Bayesian RID algorithm covers the energy range up to 2000 keV. It uses the peak centroids and peak areas from the measured gamma-ray spectra. The extracted features feed peak-based Bayesian classifiers that estimate a posterior probability for each isotope in the ANSI library. The program was tested on a MATLAB platform. The peak-based Bayesian RID algorithm was validated using single isotopes (241Am, 57Co, 137Cs, 54Mn, 60Co) and then applied to five standard nuclear materials (0.32-4.51 at.% 235U) as well as natural U and Th ores. The identification performance of the algorithm was quantified in terms of the F-score for each isotope. The posterior probability was calculated to be 54.5-74.4% for 238U and 4.7-10.5% for 235U in EC-NRM171 uranium materials. For the more complex gamma-ray spectra from CRMs, the total scoring (ST) method was preferred for performance evaluation. It was shown that the present peak-based Bayesian RID algorithm can identify 235U and 238U isotopes in LEU or natural U-Th samples when a medium-energy-resolution detector is used in the measurements.
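
A schematic of the peak-based Bayesian scoring can be sketched as follows: score each library isotope by how well measured peak centroids match its reference lines, then normalize to posterior probabilities. The line energies are real characteristic gammas, but the tiny library, the Gaussian matching width, the flat prior, and the measured centroids are illustrative, not the paper's ANSI-library values.

```python
import math

library = {                       # isotope -> characteristic gamma lines (keV)
    "137Cs": [661.7],
    "60Co":  [1173.2, 1332.5],
    "241Am": [59.5],
}
measured_peaks = [660.9, 1174.0, 1331.8]   # hypothetical LaBr3:Ce centroids
sigma = 5.0                                # medium-resolution match width (keV)

def likelihood(lines):
    # Product over reference lines of a Gaussian match to the nearest measured
    # peak; a line with no nearby peak drives the likelihood toward zero.
    l = 1.0
    for e in lines:
        d = min(abs(e - p) for p in measured_peaks)
        l *= math.exp(-0.5 * (d / sigma) ** 2)
    return l

scores = {iso: likelihood(lines) for iso, lines in library.items()}
total = sum(scores.values())
posterior = {iso: s / total for iso, s in scores.items()}   # flat prior assumed
```

With the peaks above, 137Cs and 60Co split nearly all of the posterior mass while 241Am receives essentially none; the paper's classifier additionally weights by peak areas.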

Audio Segmentation and Classification Using Support Vector Machine and Fuzzy C-Means Clustering Techniques (서포트 벡터 머신과 퍼지 클러스터링 기법을 이용한 오디오 분할 및 분류)

  • Nguyen, Ngoc;Kang, Myeong-Su;Kim, Cheol-Hong;Kim, Jong-Myon
    • The KIPS Transactions:PartB
    • /
    • v.19B no.1
    • /
    • pp.19-26
    • /
    • 2012
  • The rapid increase of information imposes new demands on content management. The purpose of automatic audio segmentation and classification is to meet this rising need for efficient content management. For this reason, this paper proposes a high-accuracy algorithm that segments audio signals and classifies them into classes such as speech, music, silence, and environmental sounds. The proposed algorithm uses a support vector machine (SVM) to detect audio cuts, the boundaries between different kinds of sounds, from the parameter sequence. We then extract feature vectors composed of statistical data, which serve as input to a fuzzy c-means (FCM) classifier that partitions the audio segments into the different classes. To evaluate the segmentation and classification performance of the proposed SVM-FCM algorithm, we consider precision and recall rates for segmentation and accuracy for classification. Furthermore, we compare the proposed algorithm with other methods, including binary and FCM classifiers, in terms of segmentation performance. Experimental results show that the proposed algorithm outperforms the other methods in both precision and recall rates.
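
The fuzzy c-means step can be sketched in one dimension: soft-partition "audio feature" values into two classes by alternating the standard membership and center updates. The feature values, the fuzzifier m = 2, and the iteration count are illustrative; the paper's feature vectors are multi-dimensional statistics.

```python
import numpy as np

# Two invented feature clusters standing in for, e.g., speech vs. music frames.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.2, 0.05, 20), rng.normal(0.8, 0.05, 20)])

m = 2.0                              # fuzzifier
centers = np.array([0.0, 1.0])       # initial class centers
for _ in range(50):
    d = np.abs(x[:, None] - centers[None, :]) + 1e-12   # point-center distances
    # Standard FCM membership: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
    u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
    # Center update: membership-weighted mean of the data.
    centers = (u**m * x[:, None]).sum(axis=0) / (u**m).sum(axis=0)
# centers settle near the two cluster means; each row of u sums to 1,
# giving the soft partition used for final class assignment.
```

Each segment would be assigned to the class with the largest membership, which is the "partition audio segments into classes" step of the abstract.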

Handwritten Image Segmentation by the Modified Area-based Region Selection Technique (변형된 면적기반영역선별 기법에 의한 문자영상분할)

  • Hwang Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.43 no.5 s.311
    • /
    • pp.30-36
    • /
    • 2006
  • In this paper, a new type of handwritten-image segmentation based on relative comparison of region areas is proposed. The original image is composed of two distinct regions: information and background. Compared with this binary original, the observed image is gray-scale, containing complex regions with speckles and noise due to degradation or contamination. When a threshold or statistical approach is applied, region deformation occurs in the binarization process. In the first step, an efficient iterated conditional mode (ICM) using lozenge-shaped blocks forms regions in the binary image. In the second step, the information region is estimated through a selection process and restored to its primary state. At each pixel, both the decision of attachment to a region and the calculation of the region's area are carried out iteratively. All region areas are sorted into a set and selected through a decision parameter obtained statistically. Our experiments show that this approach is effective on ink-rubbed copy images (拓本, 'Takbon') and efficient at shape restoration. Experiments on gray-scale images show promising shape-extraction results compared with threshold segmentation and the conventional ICM method.
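
The area-based selection step can be sketched on a toy binary image: label connected regions, sort their areas, and discard speckle-sized ones. The image and the fixed cutoff are invented; the paper derives its cutoff statistically from the sorted areas.

```python
from collections import deque

# Toy binary image: two 6-pixel "information" blobs and a 1-pixel speckle.
img = [
    "00000000",
    "01110000",
    "01110010",
    "00000000",
    "00011100",
    "00011100",
]
grid = [[int(c) for c in row] for row in img]
h, w = len(grid), len(grid[0])

def region_areas(grid):
    # 4-connected flood fill; returns region areas sorted in descending order.
    seen = [[False] * w for _ in range(h)]
    areas = []
    for i in range(h):
        for j in range(w):
            if grid[i][j] and not seen[i][j]:
                q, area = deque([(i, j)]), 0
                seen[i][j] = True
                while q:
                    a, b = q.popleft()
                    area += 1
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if 0 <= na < h and 0 <= nb < w and grid[na][nb] and not seen[na][nb]:
                            seen[na][nb] = True
                            q.append((na, nb))
                areas.append(area)
    return sorted(areas, reverse=True)

areas = region_areas(grid)
kept = [a for a in areas if a >= 3]   # invented cutoff: drop speckle regions
```

The restoration step would then keep only the pixels belonging to the selected regions.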

Document classification using a deep neural network in text mining (텍스트 마이닝에서 심층 신경망을 이용한 문서 분류)

  • Lee, Bo-Hui;Lee, Su-Jin;Choi, Yong-Seok
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.5
    • /
    • pp.615-625
    • /
    • 2020
  • In text mining, the document-term frequency matrix records the terms extracted from documents whose group information is known. In this study, we generated a document-term frequency matrix for document classification according to research field. We applied the traditional term-weighting function term frequency-inverse document frequency (TF-IDF) to the generated matrix, and also applied term frequency-inverse gravity moment (TF-IGM). In addition, we generated a document-keyword weighted matrix by extracting keywords to improve the document classification accuracy. Based on the extracted keyword matrix, we classify documents using a deep neural network. To find the optimal model, the classification accuracy was verified while varying the number of hidden layers and hidden nodes. The model with eight hidden layers showed the highest accuracy, and all TF-IGM classification accuracies (across parameter changes) were higher than those of TF-IDF. In addition, the deep neural network was confirmed to be more accurate than the support vector machine. We therefore propose applying TF-IGM and a deep neural network to document classification.
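
The TF-IGM weighting can be sketched as follows. Treat the formula as a hedged recollection of the inverse-gravity-moment scheme: igm(t) = f1 / Σ_r(r · f_r), with f_r the term's per-class frequencies sorted in descending order, and weight = tf · (1 + λ · igm), λ ≈ 7. The tiny class counts and λ are invented for illustration.

```python
# Hypothetical per-class occurrence counts for two terms over 3 classes.
class_freq = {
    "neural": [30, 2, 1],    # concentrated in one class -> discriminative
    "the":    [40, 38, 41],  # spread evenly across classes -> weak
}
lam = 7.0                    # assumed IGM coefficient

def igm(freqs):
    # Inverse gravity moment: ranges in (0, 1], equal to 1 when the term
    # occurs in a single class only.
    f = sorted(freqs, reverse=True)
    return f[0] / sum(r * fr for r, fr in enumerate(f, start=1))

def tf_igm(tf, freqs):
    return tf * (1 + lam * igm(freqs))

w_neural = tf_igm(3, class_freq["neural"])   # class-concentrated term
w_the    = tf_igm(3, class_freq["the"])      # evenly spread term
# At equal raw tf, the class-concentrated term receives the larger weight,
# which is why TF-IGM can outperform TF-IDF for classification.
```

Unlike IDF, which only counts how many documents contain a term, IGM measures how unevenly the term is distributed over classes, directly rewarding class-discriminative terms.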