• Title/Summary/Keyword: Process variants


Flail arm syndrome with several issues related to the diagnostic process

  • Kim, Jae-Youn;Park, Yun Kyung;Yoon, Bora;Lee, Kee Ook;Kim, Yong-Duk;Na, Sang-Jun
    • Annals of Clinical Neurophysiology
    • /
    • v.19 no.1
    • /
    • pp.68-70
    • /
    • 2017
  • Flail arm syndrome (FAS), one of the atypical amyotrophic lateral sclerosis (ALS) variants, has a clinical course and pathologic findings similar to those of ALS, so the two are difficult to differentiate at a glance. There are few reports to date analyzing individual FAS patients, and the polysomnography (PSG) findings in patients with FAS are not well known. We report a male FAS patient, with a review of the literature and a discussion of several issues related to the diagnostic process.

Effects of Word Frequency on a Lenition Process: Evidence from Stop Voicing and /h/ Reduction in Korean

  • Choi, Tae-Hwan;Lim, Nam-Sil;Han, Jeong-Im
    • Speech Sciences
    • /
    • v.13 no.3
    • /
    • pp.35-48
    • /
    • 2006
  • The present study examined whether higher-frequency words undergo lenition processes, such as intervocalic stop voicing and /h/ reduction, more often in the production of Korean speakers. Experiments 1 and 2 tested whether word-internal intervocalic voicing and /h/ reduction, respectively, occur more often in high-frequency words than in low-frequency ones. Results showed that the rate of voicing was not significantly different between the high-frequency and low-frequency groups; rather, both high- and low-frequency words were fully voiced in this prosodic position. However, intervocalic /h/s were deleted more often in high-frequency words than in low-frequency words, and other phonetic variants such as [h] and [w] appeared more often in the low-frequency group. Thus, with the data at hand, the results of the present study are inconclusive as to the relationship between word frequency and lenition.


Beta-Meta: a meta-analysis application considering heterogeneity among genome-wide association studies

  • Gyungbu Kim;Yoonsuk Lee;Jeong Ho Park;Dongmin Kim;Wonseok Lee
    • Genomics & Informatics
    • /
    • v.20 no.4
    • /
    • pp.49.1-49.7
    • /
    • 2022
  • Many packages for the meta-analysis of genome-wide association studies (GWAS) have been developed to discover genetic variants. Although variation across studies must be considered, few currently accessible packages estimate between-study heterogeneity. Thus, we propose a Python-based application called Beta-Meta, which can easily process a meta-analysis by automatically selecting between a fixed-effects and a random-effects model based on heterogeneity. Beta-Meta implements flexible input data manipulation to allow multiple meta-analyses of different genotype-phenotype associations in a single process. It provides a step-by-step meta-analysis of GWAS for each association in the following order: a heterogeneity test, two different calculations of the effect size and p-value depending on heterogeneity, and the Benjamini-Hochberg p-value adjustment. These methods enable users to validate the results of individual studies with greater statistical power and better estimation precision. We elaborate on these methods and illustrate them with examples from several studies of infertility-related disorders.
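The pipeline this abstract describes (heterogeneity test, fixed- vs. random-effects selection, inverse-variance pooling, Benjamini-Hochberg adjustment) can be sketched in plain Python. This is not Beta-Meta's actual code: the I² cutoff standing in for a significance test on Cochran's Q, the 0.5 threshold, and the DerSimonian-Laird estimator for between-study variance are assumptions made for the sketch.

```python
import math

def meta_analyze(betas, ses, i2_threshold=0.5):
    """Pool per-study effect sizes with inverse-variance weights,
    choosing fixed vs. random effects from estimated heterogeneity."""
    k = len(betas)
    w = [1.0 / se ** 2 for se in ses]              # inverse-variance weights
    sw = sum(w)
    beta_fe = sum(wi * b for wi, b in zip(w, betas)) / sw
    # Cochran's Q statistic (df = k - 1) and the I^2 heterogeneity ratio
    q = sum(wi * (b - beta_fe) ** 2 for wi, b in zip(w, betas))
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    if i2 < i2_threshold:                          # little heterogeneity
        return "fixed", beta_fe, math.sqrt(1.0 / sw)
    # DerSimonian-Laird between-study variance tau^2, then re-weight
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0
    wr = [1.0 / (se ** 2 + tau2) for se in ses]
    beta_re = sum(wi * b for wi, b in zip(wr, betas)) / sum(wr)
    return "random", beta_re, math.sqrt(1.0 / sum(wr))

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values (monotone step-up)."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj, prev = [0.0] * n, 1.0
    for rank in range(n, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * n / rank)
        adj[i] = prev
    return adj
```

With three concordant studies the sketch falls back to the fixed-effects pool; discordant effect sizes would push I² above the cutoff and trigger the random-effects branch.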

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieval of similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact-matching engine when querying an OWL (Web Ontology Language) version of the MIT Process Handbook, an electronic repository of best-practice business processes. The Handbook is intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process-retrieval experiments, we exported it into an OWL-based format: we modeled the Process Handbook meta-model in OWL and exported the processes in the Handbook as instances of that meta-model. Next, we needed a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning, and used subjective ratings for correct answers and for similarity values between processes. To generate a semantics-preserving test data set, we instead created 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants serve as the correct answers for the target process. We then devised diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple text-retrieval similarity algorithms such as TF-IDF and Levenshtein edit distance, and also a tree edit distance measure, because semantic processes appear to have a graph structure. In addition, we design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify the relationships between a semantic process and its subcomponents, this information can be utilized in calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are used to calculate the overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance perform better than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, performs reasonably well in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
We generate semantic process data and a retrieval-experiment dataset from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the results of an exact-matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As for limitations and future work, we need to perform experiments with datasets from other domains; and, since the diverse measures yield many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
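The generic building blocks this abstract names can be sketched briefly; this is not the paper's Lev-TFIDF-JaccardAll implementation, only textbook definitions of Levenshtein edit distance and of Dice's and Jaccard's coefficients over sets (e.g. sets of sub-process or goal names, an assumption for illustration).

```python
def levenshtein(a, b):
    """Edit distance between two strings (dynamic programming, one row kept)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            # deletion, insertion, or substitution (free if chars match)
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def dice(s1, s2):
    """Dice's coefficient: 2|A∩B| / (|A|+|B|)."""
    s1, s2 = set(s1), set(s2)
    return 2 * len(s1 & s2) / (len(s1) + len(s2)) if s1 or s2 else 1.0

def jaccard(s1, s2):
    """Jaccard similarity: |A∩B| / |A∪B|."""
    s1, s2 = set(s1), set(s2)
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 1.0
```

A combined measure along the paper's lines would weight a string distance over names/descriptions against these set overlaps over process structure; the weighting itself is not specified here.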

Human Laughter Generation using Hybrid Generative Models

  • Mansouri, Nadia;Lachiri, Zied
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.5
    • /
    • pp.1590-1609
    • /
    • 2021
  • Laughter is one of the most important nonverbal sounds that humans generate; it is a means of expressing emotion. The acoustic and contextual features of this specific sound differ from those of speech, and many difficulties arise during their modeling. In this work, we propose an audio laughter generation system based on unsupervised generative models: the autoencoder (AE) and its variants. The procedure combines three main sub-processes: (1) analysis, which consists of extracting the log-magnitude spectrogram from the laughter database; (2) training of the generative models; and (3) the synthesis stage, which involves an intermediate mechanism, the vocoder. To improve synthesis quality, we suggest three hybrid models (LSTM-VAE, GRU-VAE and CNN-VAE) that combine the representation-learning capacity of the variational autoencoder (VAE) with the temporal modelling ability of a long short-term memory RNN (LSTM) and the CNN's ability to learn invariant features. To assess the performance of our proposed audio laughter generation process, an objective evaluation (RMSE) and a perceptual audio quality test (listening test) were conducted. According to these evaluation metrics, the GRU-VAE outperforms the other VAE models.
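The objective evaluation named here, RMSE between reference and generated spectrograms, is straightforward to state; a minimal sketch follows, assuming both spectrograms are equally shaped grids of log-magnitude values given as nested lists (the paper's exact feature layout is not specified).

```python
import math

def rmse(ref, gen):
    """Root-mean-square error between two equally shaped 2-D grids,
    e.g. frames x frequency-bins of log-magnitude spectrograms."""
    total, n = 0.0, 0
    for row_r, row_g in zip(ref, gen):
        for r, g in zip(row_r, row_g):
            total += (r - g) ** 2
            n += 1
    return math.sqrt(total / n)
```

A lower RMSE means the generated spectrogram tracks the reference more closely, which is how the abstract ranks the VAE variants objectively.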

Grid-based Gaussian process models for longitudinal genetic data

  • Chung, Wonil
    • Communications for Statistical Applications and Methods
    • /
    • v.29 no.1
    • /
    • pp.65-83
    • /
    • 2022
  • Although various statistical methods have been developed to map time-dependent genetic factors, most identified genetic variants can explain only a small portion of the estimated genetic variation in longitudinal traits. Gene-gene and gene-time/environment interactions are known to be important putative sources of the missing heritability. However, mapping epistatic gene-gene interactions is extremely difficult due to the very large parameter spaces for models containing such interactions. In this paper, we develop a Gaussian process (GP) based nonparametric Bayesian variable selection method for longitudinal data. It maps multiple genetic markers without restricting to pairwise interactions. Rather than modeling each main and interaction term explicitly, the GP model measures the importance of each marker, regardless of whether it is mostly due to a main effect or some interaction effect(s), via an unspecified function. To improve the flexibility of the GP model, we propose a novel grid-based method for the within-subject dependence structure. The proposed method can accurately approximate complex covariance structures. The dimension of the covariance matrix depends only on the number of fixed grid points although each subject may have different numbers of measurements at different time points. The deviance information criterion (DIC) and the Bayesian predictive information criterion (BPIC) are proposed for selecting an optimal number of grid points. To efficiently draw posterior samples, we combine a hybrid Monte Carlo method with a partially collapsed Gibbs (PCG) sampler. We apply the proposed GP model to a mouse dataset on age-related body weight.
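The abstract's key point, that the covariance matrix dimension depends only on the number of fixed grid points rather than on each subject's measurement times, can be illustrated with a small sketch. The squared-exponential kernel and its hyperparameters here are assumptions for illustration; the paper's actual within-subject dependence structure is its own grid-based construction.

```python
import math

def se_kernel(t1, t2, var=1.0, length=1.0):
    """Squared-exponential covariance between two time points."""
    return var * math.exp(-0.5 * ((t1 - t2) / length) ** 2)

def grid_covariance(grid, var=1.0, length=1.0, nugget=1e-6):
    """Covariance matrix over fixed grid points. Its size is
    len(grid) x len(grid) regardless of how many measurements
    each subject contributes, or at which time points."""
    n = len(grid)
    return [[se_kernel(grid[i], grid[j], var, length)
             + (nugget if i == j else 0.0)          # jitter for stability
             for j in range(n)] for i in range(n)]
```

Subject-specific observation times would then be tied back to the grid (e.g. by interpolation weights), which is the step that keeps the matrix dimension fixed while subjects vary.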

Fast Hilbert R-tree Bulk-loading Scheme using GPGPU (GPGPU를 이용한 Hilbert R-tree 벌크로딩 고속화 기법)

  • Yang, Sidong;Choi, Wonik
    • Journal of KIISE
    • /
    • v.41 no.10
    • /
    • pp.792-798
    • /
    • 2014
  • In spatial databases, the R-tree is one of the most widely used indexing structures, and many variants have been proposed to improve its performance. Among these variants, the Hilbert R-tree is a representative method that uses the Hilbert curve to process large amounts of data without high-cost split techniques when constructing the R-tree. The Hilbert R-tree, however, is hardly applicable to large-scale applications in practice, mainly due to high pre-processing costs and slow bulk-load times. To overcome these limitations, we propose a novel approach that parallelizes Hilbert mapping and thus accelerates bulk-loading of the Hilbert R-tree in GPU memory. The GPU-based Hilbert R-tree improves bulk-loading performance by applying the inversed-cell method and exploiting parallelism when packing the R-tree structure. Our experimental results show that the proposed scheme is up to 45 times faster than traditional CPU-based bulk-loading schemes.
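The Hilbert mapping at the core of this scheme is the standard coordinate-to-curve-distance conversion; a sequential CPU sketch is below (the paper's contribution is parallelizing this step and the packing on the GPU, which is not shown here). Sorting rectangle centers by their Hilbert distance yields the insertion order used for bulk-loading.

```python
def xy2d(n, x, y):
    """Map (x, y) in an n x n grid (n a power of two) to its distance
    along the Hilbert curve. Classic iterative formulation."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate the quadrant so the sub-curve is oriented consistently
        if ry == 0:
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_order(points, n):
    """Order points (e.g. rectangle centers) for bulk-loading."""
    return sorted(points, key=lambda p: xy2d(n, p[0], p[1]))
```

Because each point's distance is computed independently, this loop is exactly the kind of work that maps well onto one GPU thread per entry, which is what makes the paper's GPGPU parallelization effective.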

APPLICATION OF RANDOMLY AMPLIFIED POLYMORPHIC DNA(RAPD) ANALYSIS METHOD FOR CLASSIFICATION AND BREEDING OF THE KOREAN GINSENG

  • Lim Y.P.;Shin C.S.;Lee S.J.;Youn Y.N.;Jo J.S.
    • Proceedings of the Ginseng society Conference
    • /
    • 1993.09a
    • /
    • pp.138-142
    • /
    • 1993
  • Korean ginseng has been widely used as medicine since ancient times in Asia. Current breeding efforts in Korea include individual plant selection and subsequent pure-line isolation, and a considerable number of lines with desirable traits have thus been isolated. However, there have been few data on genetic markers and their analysis for the selection of superior varieties. For taxonomic characterization and the development of genetic markers for ginseng breeding, molecular biological methods including RFLP and RAPD were applied. Cytoplasmic DNA of ginseng was analyzed by RFLP; however, no pattern differences were found among the chloroplast or mitochondrial DNA of the variants. In the RAPD analysis, the band patterns produced by 4 of 10 RAPD primers showed distinctive polymorphism among 9 ginseng variants and lines, and a Similarity Index (SI) based on the polymorphism was calculated to characterize the extent and nature of this variability in ginseng. The sequences of the 4 selected primers were TGCCGAGCTG, AATCGGGCTG, GAAACGGGTG, and GTGACGTAGG. By the SI based on the polymorphic band patterns, Chungkyung-Chong and Hwangskoog-Chong, and Jakyung-Chong 81783 and Jinjakyung of Russia, showed the closest SI. The data for KG101 coincided with the fact that it was released from Hwangskoog-Chong by a breeding process. The data for the Jakyung strains indicated significant variation among the strains. From these results, the RAPD analysis method could be successfully applied to classification and genetic analysis for the breeding of Korean ginseng.


The Daylight and Energy Performance Evaluation of a Multi-purpose Solar Window System Using a Simulation Program (시뮬레이션에 의한 다기능 복합 솔라윈도우 시스템의 채광과 에너지성능평가)

  • Jeong, Yeol-Wha;Lee, Seun-Myung
    • Journal of the Korean Solar Energy Society
    • /
    • v.31 no.6
    • /
    • pp.103-110
    • /
    • 2011
  • The aim of this study was to analyze the heating/cooling performance and daylighting performance of a solar window system built into apartments. The solar window integrates daylight, as a third form of solar energy, into a PV/solar collector system. The process of this study was as follows: 1) The solar window system was designed through an investigation of previous papers and work. 2) A simulation program (Lightscape 3.2) was used for the daylighting performance analysis; a reference model was built to analyze the daylighting performance of the solar window system. 3) Simulation programs (ESP-r, Therm 5.0, Window 6.0) were used for the energy performance analysis; a reference model was built to analyze the energy and daylighting performance of the solar window system. 4) The size of the simulation model for the daylighting and heating/cooling energy analysis was $148.5m^2$. 5) The lighting performance analysis was carried out with various variants, such as the size and installed area of the solar window system. 6) The energy performance simulation was carried out with various variants, such as the integrated U-value of the solar window system according to its position, installed angle, and insulation thickness. Consequently, when the solar window system was installed on the balcony window of an apartment, the annual heating and cooling energy of the reference model was reduced by an average of $4.1kWh/m^2$, or 4.2%.

Automatic Korean to English Cross Language Keyword Assignment Using MeSH Thesaurus (MeSH 시소러스를 이용한 한영 교차언어 키워드 자동 부여)

  • Lee Jae-Sung;Kim Mi-Suk;Oh Yong-Soon;Lee Young-Sung
    • The KIPS Transactions:PartB
    • /
    • v.13B no.2 s.105
    • /
    • pp.155-162
    • /
    • 2006
  • The medical thesaurus MeSH (Medical Subject Headings) has long been used as a controlled-vocabulary thesaurus for indexing English medical papers. In this paper, we propose an automatic cross-language keyword assignment method that assigns English MeSH index terms to the abstract of a Korean medical paper, and we compare its performance with the indexing performance of human indexers and of the authors. The index-term assignment procedure first extracts Korean MeSH terms from the text, converts them into the corresponding English MeSH terms, and calculates the importance of the terms to select the highest-ranked terms as keywords. For this process, an effective method to solve the spacing-variant problem is proposed. Experiments showed that the method solved the spacing-variant problem and reduced the thesaurus space by about 42%. They also showed that the performance of automatic keyword assignment is much lower than that of human indexers, but as good as that of the authors.
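The abstract does not spell out its spacing-variant method, so the following is only a naive illustration of the problem: Korean terms may be written with or without internal spaces, so a thesaurus lookup keyed on the raw string misses variants. Indexing terms by a space-stripped key (a hypothetical simplification, not the paper's technique) makes the issue concrete.

```python
def normalize(term):
    """Collapse internal spacing so spacing variants share one key."""
    return term.replace(" ", "")

def build_index(mesh_terms):
    """Index thesaurus terms by their space-removed form, so that
    e.g. '심근 경색' and '심근경색' (myocardial infarction) both
    resolve to the single canonical thesaurus entry."""
    return {normalize(t): t for t in mesh_terms}

def lookup(index, candidate):
    """Return the canonical term for a candidate, or None if absent."""
    return index.get(normalize(candidate))
```

A real system would need more than space-stripping (morphological variation, compounding), but this captures why normalization can shrink the effective thesaurus space, as the reported 42% reduction suggests.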