• Title/Summary/Keyword: Unsupervised Probabilistic Model

Range Detection of Wa/Kwa Parallel Noun Phrase using a Probabilistic Model and Modification Information (확률모형과 수식정보를 이용한 와/과 병렬사구 범위결정)

  • Choi, Yong-Seok;Shin, Ji-Ae;Choi, Key-Sun
    • Journal of KIISE: Software and Applications, v.35 no.2, pp.128-136, 2008
  • Recognition of parallel structure at an early stage of sentence parsing can reduce parsing complexity. In this paper, we propose an unsupervised, language-independent probabilistic model for recognition of parallel noun structures. The proposed model is based on the idea of swapping constituents, which exploits the properties of symmetry (two or more identical constituents are repeated) and reversibility (the order of constituents is interchangeable) in parallel structures. Non-symmetric patterns that cannot be captured by the general symmetry rule are additionally resolved using modifier information. In particular, this paper shows how the proposed model is applied to recognize Korean parallel noun phrases connected by the "wa/kwa" particle. Our model is compared with other models, including supervised ones, and performs better at recognizing parallel noun phrases. A toy sketch of the swap test appears below.
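As a rough illustration of the swapping idea, the sketch below tests reversibility with a smoothed bigram language model built from a raw corpus. This is an assumption-laden stand-in, not the paper's actual model: the function names, the `margin` threshold, and the literal "wa" token are all hypothetical.

```python
import math
from collections import Counter

def bigram_counts(sentences):
    """Collect unigram and bigram counts from a raw (untagged) corpus."""
    uni, bi = Counter(), Counter()
    for toks in sentences:
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def seq_logprob(toks, uni, bi, alpha=1.0):
    """Add-alpha smoothed bigram log-probability of a token sequence."""
    vocab = max(len(uni), 1)
    return sum(
        math.log((bi[(a, b)] + alpha) / (uni[a] + alpha * vocab))
        for a, b in zip(toks, toks[1:])
    )

def swap_test(left, right, ctx_l, ctx_r, uni, bi, margin=1.0):
    """Reversibility check: a conjoined pair behaves as a parallel structure
    if exchanging the two conjuncts barely changes the corpus likelihood."""
    orig = ctx_l + left + ["wa"] + right + ctx_r
    swap = ctx_l + right + ["wa"] + left + ctx_r
    return abs(seq_logprob(orig, uni, bi) - seq_logprob(swap, uni, bi)) <= margin
```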

Unsupervised Semantic Role Labeling for Korean Adverbial Case (비지도 학습을 기반으로 한 한국어 부사격의 의미역 결정)

  • Kim, Byoung-Soo;Lee, Yong-Hun;Lee, Jong-Hyeok
    • Journal of KIISE: Software and Applications, v.34 no.2, pp.112-122, 2007
  • Training a statistical model for semantic role labeling requires a large manually tagged corpus. However, no such corpus exists for Korean, and constructing one from scratch is a long and tedious job. This paper suggests a modified self-training algorithm, an unsupervised method that trains a semantic role labeling model from raw corpora. For initial training, a small tagged corpus is automatically constructed from case frames in the Sejong Electronic Dictionary. Using this corpus, a probabilistic model is trained incrementally, achieving 83.00% accuracy on four selected adverbial cases. A minimal self-training loop is sketched below.
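A minimal version of such a self-training loop might look like the following sketch (not the paper's exact algorithm): the seed set stands in for the corpus auto-built from Sejong case frames, the classifier and feature names are illustrative, and confident pseudo-labels from the unlabeled pool are folded back into training.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def self_train(seed_feats, seed_roles, pool_feats, rounds=5, threshold=0.9):
    """Self-training: fit on the seed set, pseudo-label confident instances
    from the unlabeled pool, add them to the training data, and repeat."""
    vec = DictVectorizer()
    X = vec.fit_transform(seed_feats + pool_feats)
    X_train, X_pool = X[:len(seed_feats)], X[len(seed_feats):]
    y_train = list(seed_roles)
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        clf.fit(X_train, y_train)
        if X_pool.shape[0] == 0:
            break
        proba = clf.predict_proba(X_pool)
        confident = proba.max(axis=1) >= threshold   # keep only sure labels
        if not confident.any():
            break
        X_train = vstack([X_train, X_pool[confident]])
        y_train += list(clf.classes_[proba[confident].argmax(axis=1)])
        X_pool = X_pool[~confident]
    return clf, vec
```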

Bayesian Model for Probabilistic Unsupervised Learning (확률적 자율 학습을 위한 베이지안 모델)

  • 최준혁;김중배;김대수;임기욱
    • Journal of the Korean Institute of Intelligent Systems, v.11 no.9, pp.849-854, 2001
  • The GTM (Generative Topographic Mapping) model is a probabilistic version of the SOM (Self-Organizing Map) proposed by T. Kohonen. GTM models data through latent (hidden) variables of a probability distribution, a characteristic not present in the SOM model; GTM can therefore analyze data more accurately, overcoming the limits of SOM. In this paper we propose BGTM (Bayesian GTM), which combines Bayesian learning with the GTM model and attains a small misclassification ratio. By combining GTM's fast computation and probabilistic modeling of data with principled Bayesian inference, the BGTM model improves on existing models. The E-step that makes GTM probabilistic is sketched below.
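For concreteness, the step that gives GTM its probabilistic character can be written in a few lines of NumPy. In the sketch below, `Y` is assumed to hold the latent grid points already mapped into data space (in full GTM, Y = Phi @ W) and `beta` is the inverse noise variance; the Bayesian ingredient of BGTM, a prior over the mapping weights, is not shown.

```python
import numpy as np

def gtm_responsibilities(X, Y, beta):
    """GTM E-step: posterior responsibility of each mapped latent grid
    point (row of Y) for each data point (row of X), under isotropic
    Gaussian noise with precision beta."""
    # squared distances between every data point and every mapped grid point
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    log_r = -0.5 * beta * d2
    log_r -= log_r.max(axis=1, keepdims=True)   # for numerical stability
    r = np.exp(log_r)
    return r / r.sum(axis=1, keepdims=True)
```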

Range Detection of Wa/Kwa Parallel Noun Phrase by Alignment method (정렬기법을 활용한 와/과 병렬명사구 범위 결정)

  • Choe, Yong-Seok;Sin, Ji-Ae;Choe, Gi-Seon;Kim, Gi-Tae;Lee, Sang-Tae
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2008.10a, pp.90-93, 2008
  • In natural language, repeated constituents in an expression are commonly omitted, and recovering the omitted constituents is necessary for analyzing the meaning of a sentence. This paper addresses recognition of the boundaries of parallel noun phrases by recovering omitted constituents. Recognizing parallel noun phrases can greatly reduce the complexity of sentence parsing; moreover, in natural-language information retrieval, recognizing nouns together with their modifiers plays an important role in building indexes. We propose an unsupervised probabilistic model that identifies parallel cores as well as the boundaries of parallel noun phrases conjoined by a conjunctive particle. It is based on the idea of swapping constituents, exploiting symmetry (two or more identical constituents are repeated) and reversibility (the order of constituents is interchangeable) in parallel structures. Semantic features of the modifiers around a parallel noun phrase are also used in the probabilistic swapping model. The model is language-independent and is presented here for parallel noun phrases in Korean. Experiments show that our probabilistic model outperforms a symmetry-based model and supervised machine-learning approaches. A schematic boundary search is sketched below.
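The boundary-identification step can be pictured as scoring every candidate pair of conjunct spans around the particle with the same swap idea. The sketch below is schematic, not the paper's procedure: `score(seq)` is any sequence-plausibility function (for instance the bigram scorer sketched earlier), and all identifiers are illustrative.

```python
def detect_boundaries(tokens, particle_idx, score, max_len=4):
    """Enumerate candidate conjunct spans around the conjunctive particle
    and keep the pair whose swapped order scores most like the original
    (i.e. the most reversible pair)."""
    best, best_gap = None, float("inf")
    for l in range(1, max_len + 1):          # left conjunct length
        for r in range(1, max_len + 1):      # right conjunct length
            lo, hi = particle_idx - l, particle_idx + 1 + r
            if lo < 0 or hi > len(tokens):
                continue
            left = tokens[lo:particle_idx]
            right = tokens[particle_idx + 1:hi]
            orig = tokens[:lo] + left + [tokens[particle_idx]] + right + tokens[hi:]
            swap = tokens[:lo] + right + [tokens[particle_idx]] + left + tokens[hi:]
            gap = abs(score(orig) - score(swap))
            if gap < best_gap:
                best, best_gap = (lo, hi), gap
    return best   # (start, end) of the most plausible parallel span
```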

Topic Masks for Image Segmentation

  • Jeong, Young-Seob;Lim, Chae-Gyun;Jeong, Byeong-Soo;Choi, Ho-Jin
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.12, pp.3274-3292, 2013
  • Unsupervised methods for image segmentation are drawing attention because most images have no labels or tags. A topic model is such an unsupervised probabilistic method that captures latent aspects of data, where each latent aspect, or topic, is associated with one homogeneous region. The results of topic models, however, usually contain noise, which decreases overall segmentation performance. In this paper, to improve the performance of image segmentation using topic models, we propose two topic masks applicable to the topic assignments of homogeneous regions obtained from topic models. The topic masks capture noise among the assigned topic labels and remove it by replacement, just as image masks do for pixels. However, because topic assignments differ in nature from image pixels, the topic masks have properties different from existing pixel masks. This paper makes two contributions. First, the topic masks can be used to reduce the noise in topic assignments obtained from topic models for image segmentation tasks. Second, we test the effectiveness of the topic masks by applying them to segmented images obtained from the Latent Dirichlet Allocation model and the Spatial Latent Dirichlet Allocation model on the MSRC image dataset. The empirical results show that one of the masks successfully reduces topic noise. A mode-filter analogue of such a mask is sketched below.
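As a rough analogue of such a mask, the sketch below applies a majority-vote (mode) filter to a grid of non-negative integer topic assignments. The paper's actual masks are defined differently, so treat this purely as an illustration of replacing noisy topic labels by neighborhood consensus.

```python
import numpy as np

def topic_mode_mask(labels, k=3):
    """Replace each topic assignment with the majority topic in its k x k
    neighborhood, smoothing isolated topic noise (a mode filter applied
    to discrete topic labels rather than pixel intensities).
    `labels` must be a 2-D array of non-negative integer topic ids."""
    pad = k // 2
    padded = np.pad(labels, pad, mode="edge")
    out = np.empty_like(labels)
    for i in range(labels.shape[0]):
        for j in range(labels.shape[1]):
            window = padded[i:i + k, j:j + k].ravel()
            out[i, j] = np.bincount(window).argmax()   # majority topic
    return out
```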

En-route Ground Speed Prediction and Posterior Inference Using Generative Model (생성 모형을 사용한 순항 항공기 향후 속도 예측 및 추론)

  • Paek, Hyunjin;Lee, Keumjin
    • Journal of the Korean Society for Aviation and Aeronautics, v.27 no.4, pp.27-36, 2019
  • Accurate trajectory prediction is key to the safe and efficient operation of aircraft. One way to improve trajectory prediction accuracy is to develop a model for aircraft ground speed prediction. This paper proposes a generative model for posterior prediction of aircraft ground speed. The proposed method fits a Gaussian Mixture Model (GMM) to historical aircraft speed data, and the fitted model is then used to generate probabilistic speed profiles for an aircraft. The performance of the proposed method is demonstrated on real traffic data from the Incheon Flight Information Region (FIR). A minimal fit-and-condition sketch follows.
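A minimal sketch of the fit-then-condition idea, using scikit-learn's GaussianMixture and standard per-component Gaussian conditioning. The synthetic `profiles` array is an assumption standing in for historical speed data, which the paper takes from Incheon FIR tracks.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

# Synthetic stand-in: each row is a ground-speed profile (knots)
# sampled at 10 fixed points along a route.
rng = np.random.default_rng(0)
profiles = rng.normal(450.0, 15.0, size=(500, 10))

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(profiles)

def posterior_future(gmm, observed):
    """Condition the fitted GMM on the first len(observed) speed samples,
    yielding a posterior mixture over the remaining samples (Gaussian
    conditioning per component, with likelihood-reweighted weights)."""
    d = len(observed)
    w, means, covs = gmm.weights_, gmm.means_, gmm.covariances_
    post_w, post_mu = [], []
    for k in range(len(w)):
        mu_a, mu_b = means[k][:d], means[k][d:]
        S_aa, S_ba = covs[k][:d, :d], covs[k][d:, :d]
        gain = S_ba @ np.linalg.inv(S_aa)
        post_mu.append(mu_b + gain @ (observed - mu_a))
        post_w.append(w[k] * multivariate_normal.pdf(observed, mu_a, S_aa))
    post_w = np.asarray(post_w)
    return post_w / post_w.sum(), np.asarray(post_mu)

# Posterior over the last 6 speed samples given the first 4
weights, future_means = posterior_future(gmm, profiles[0, :4])
```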

Weighted Local Naive Bayes Link Prediction

  • Wu, JieHua;Zhang, GuoJi;Ren, YaZhou;Zhang, XiaYan;Yang, Qiao
    • Journal of Information Processing Systems, v.13 no.4, pp.914-927, 2017
  • Link prediction in weighted networks is a challenging issue in complex network analysis. Unsupervised methods based on local structure are widely used for this predictive task. However, the results are still far from satisfactory, as most of the literature neglects two important points: different common neighbors exert different influence on potential links, and the weights associated with links in the local structure also differ. In this paper, we adapt an effective link prediction model, the local naive Bayes model, to the weighted scenario to address this issue. Accordingly, we propose a weighted local naive Bayes (WLNB) probabilistic link prediction framework. The main contribution is the incorporation of a weighted clustering coefficient, allowing our model to infer the weighted contribution of each common neighbor at prediction time. In addition, WLNB can be applied on top of several classic similarity metrics. We evaluate WLNB on several kinds of real-world weighted datasets. Experimental results show that our proposed approach performs better (by AUC and precision) than several alternative link prediction methods for weighted complex networks. An illustrative scoring function is sketched below.
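The flavor of the approach can be sketched with NetworkX: each common neighbor contributes a naive-Bayes-style log-odds "role" term built from its weighted clustering coefficient, scaled by the weights of the edges connecting it to the candidate pair. The exact WLNB formula differs; this is an illustrative approximation with hypothetical names.

```python
import math
import networkx as nx

def wlnb_like_score(G, x, y, eps=1e-9):
    """Illustrative weighted local-naive-Bayes link score: every common
    neighbor w of (x, y) contributes a log-odds role term derived from
    its weighted clustering coefficient, scaled by local edge weights."""
    score = 0.0
    for w in nx.common_neighbors(G, x, y):
        c = nx.clustering(G, w, weight="weight")           # weighted coefficient
        s = G[x][w].get("weight", 1.0) + G[w][y].get("weight", 1.0)
        score += s * math.log((c + eps) / (1.0 - c + eps))
    return score

# Rank candidate links of a weighted graph G by the score:
# ranked = sorted(nx.non_edges(G),
#                 key=lambda e: wlnb_like_score(G, *e), reverse=True)
```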

Non-Simultaneous Sampling Deactivation during the Parameter Approximation of a Topic Model

  • Jeong, Young-Seob;Jin, Sou-Young;Choi, Ho-Jin
    • KSII Transactions on Internet and Information Systems (TIIS), v.7 no.1, pp.81-98, 2013
  • Since Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA) were introduced, many revised or extended topic models have appeared. Because the likelihoods of these models are intractable, training any topic model requires an approximation algorithm such as variational approximation, Laplace approximation, or Markov chain Monte Carlo (MCMC). Although these approximation algorithms perform well, training a topic model is still computationally expensive given the large amount of data it requires. In this paper, we propose a new method, called non-simultaneous sampling deactivation, for efficient approximation of the parameters of a topic model. Whereas traditional approximation algorithms sample every random variable for a single predefined burn-in period, our method is based on the observation that the random variable nodes in a topic model converge at different rates. During the iterative approximation process, the proposed method allows each random variable node to be terminated, or deactivated, once it has converged. Therefore, compared with traditional approximation schemes in which every node is deactivated concurrently, the proposed method improves inference efficiency in both time and memory. We do not propose a new approximation algorithm but a new process applicable to existing approximation algorithms. Through experiments, we show the time and memory efficiency of the method and discuss the tradeoff between the efficiency of the approximation process and parameter consistency. A schematic deactivation loop is sketched below.
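The core idea can be schematized independently of any particular topic model: keep a per-node convergence check inside the sampling loop and freeze nodes as they converge, rather than running every node for one global burn-in. In the sketch below, `nodes` and `resample` are hypothetical stand-ins for a real sampler's variable nodes and its conditional-sampling step.

```python
import numpy as np

def gibbs_with_deactivation(nodes, resample, max_iter=1000, window=20, tol=1e-3):
    """Non-simultaneous deactivation sketch: each variable node is resampled
    only until a running statistic of its draws stops changing, then frozen.
    `resample(node)` draws a new value for a node given the current state."""
    active = set(nodes)
    history = {n: [] for n in nodes}
    for _ in range(max_iter):
        for n in list(active):
            history[n].append(resample(n))
            h = history[n]
            if len(h) >= 2 * window:
                old = np.mean(h[-2 * window:-window])
                new = np.mean(h[-window:])
                if abs(new - old) < tol:   # node has converged
                    active.discard(n)      # deactivate: stop sampling it
        if not active:                     # everything converged early
            break
    return history
```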