• Title/Summary/Keyword: Word Alignment


A Model-Based Method for Information Alignment: A Case Study on Educational Standards

  • Choi, Namyoun;Song, Il-Yeol;Zhu, Yongjun
    • Journal of Computing Science and Engineering
    • /
    • v.10 no.3
    • /
    • pp.85-94
    • /
    • 2016
  • We propose a model-based method for information alignment using educational standards as a case study. Discrepancies and inconsistencies in educational standards across different states/cities hinder the retrieval and sharing of educational resources. Unlike existing educational standards alignment systems that give only binary judgments (either "aligned" or "not-aligned"), our proposed system classifies each pair of educational standard statements into one of seven levels of alignment: Strongly Fully-aligned, Weakly Fully-aligned, Partially-aligned***, Partially-aligned**, Partially-aligned*, Poorly-aligned, and Not-aligned. This 7-level categorization extends the notion of binary alignment and provides a finer-grained system for comparing educational standards that can broaden categories of resource discovery and retrieval. This study continues our previous use of mathematics education as a domain because of its generally unambiguous concepts. We adopt a materialization pattern (MP) model developed in our earlier work to represent each standard statement as a verb-phrase graph and a noun-phrase graph; we align a pair of statements using graph matching based on Bloom's Taxonomy, WordNet, and a taxonomy of mathematics concepts. Our experiments on data sets of mathematics educational standards show that our proposed system can provide alignment results with a high degree of agreement with domain experts' judgments.
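The seven-level scheme above can be pictured as a thresholding step over a graph-matching similarity score. The following is a minimal sketch in which the score range and every threshold value are illustrative assumptions, not the authors' actual decision rule:

```python
# Hypothetical sketch: map a graph-matching similarity score in [0, 1]
# to the paper's seven alignment levels. Thresholds are invented for
# illustration only.
LEVELS = [
    (0.95, "Strongly Fully-aligned"),
    (0.85, "Weakly Fully-aligned"),
    (0.70, "Partially-aligned***"),
    (0.55, "Partially-aligned**"),
    (0.40, "Partially-aligned*"),
    (0.20, "Poorly-aligned"),
]

def alignment_level(score: float) -> str:
    """Return the first level whose threshold the score meets."""
    for threshold, label in LEVELS:
        if score >= threshold:
            return label
    return "Not-aligned"
```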

Global Sequence Homology Detection Using Word Conservation Probability

  • Yang, Jae-Seong;Kim, Dae-Kyum;Kim, Jin-Ho;Kim, Sang-Uk
    • Interdisciplinary Bio Central
    • /
    • v.3 no.4
    • /
    • pp.14.1-14.9
    • /
    • 2011
  • Protein homology detection is an important issue in comparative genomics. Because of the exponential growth of sequence databases, fast and efficient homology detection tools are urgently needed. Currently, sequence comparison methods using local alignment, such as BLAST, are generally used for homology detection, as they give a reasonable measure of sequence similarity. However, these methods have drawbacks in measuring overall sequence similarity, especially when dealing with eukaryotic genomes that often contain many insertions and duplications. These methods also do not provide explicit models for speciation, so it is difficult to interpret their similarity measures as evidence of homology. Here, we present a novel method based on the Word Conservation Score (WCS) to address these limitations. Instead of counting each amino acid, we adopt the concept of a 'word' to compare sequences. WCS measures overall sequence similarity by comparing word contents, which is much faster than BLAST comparisons. Furthermore, the evolutionary distance between homologous sequences can be measured by WCS. We therefore expect sequence comparison with WCS to be useful for multiple-species comparisons of large genomes. In performance comparisons on protein structural classifications, our method showed a considerable improvement over BLAST. Our method found bigger micro-syntenic blocks, which consist of orthologs with conserved gene order. By testing on various datasets, we show that WCS gives a faster and better overall similarity measure than BLAST.
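The word-based comparison idea can be sketched as overlap between multisets of fixed-length "words" (k-mers). This toy function is an illustrative stand-in for the paper's Word Conservation Score, whose exact formula is not given here:

```python
from collections import Counter

def kmer_words(seq: str, k: int = 3) -> Counter:
    """Decompose a sequence into overlapping k-letter 'words'."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def word_similarity(a: str, b: str, k: int = 3) -> float:
    """Fraction of shared words between two sequences (illustrative
    stand-in for WCS; word counting replaces per-residue alignment)."""
    wa, wb = kmer_words(a, k), kmer_words(b, k)
    shared = sum((wa & wb).values())   # multiset intersection
    total = max(sum(wa.values()), sum(wb.values()))
    return shared / total if total else 0.0
```

Because it only counts word content, this comparison runs in linear time per sequence and tolerates rearrangements that would fragment a local alignment.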

Automatic Inter-Phoneme Similarity Calculation Method Using PAM Matrix Model (PAM 행렬 모델을 이용한 음소 간 유사도 자동 계산 기법)

  • Kim, Sung-Hwan;Cho, Hwan-Gue
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.3
    • /
    • pp.34-43
    • /
    • 2012
  • Determining the similarity between two strings can be applied to various areas such as information retrieval, spell checking, and spam filtering. Similarity calculation between Korean strings based on dynamic programming first requires a definition of the similarity between phonemes. However, existing methods have the limitation that they use manually set similarity scores. In this paper, we propose a method to automatically calculate inter-phoneme similarity from a given set of variant words using a PAM-like probabilistic model. Our proposed method first finds pairs of similar words in the given word set and derives derivation rules from text alignment results among the similar word pairs. Similarity scores are then calculated from the frequencies of variations between different phonemes. Experiments show an improvement of 10.1%~14.1% and 8.1%~11.8% in sensitivity over the simple match-mismatch scoring scheme and the manually set inter-phoneme similarity scheme, respectively, at a specificity of 77.2%~80.4%.
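A PAM-style derivation of similarity scores from observed substitutions can be sketched as the log-odds of observed versus expected phoneme co-occurrence. The pseudo-count and normalization choices below are assumptions, not the paper's exact model:

```python
import math
from collections import Counter

def phoneme_similarity(aligned_pairs):
    """Derive log-odds similarity scores from phoneme substitutions
    observed in aligned variant-word pairs (PAM-style sketch; the
    1e-9 pseudo-count and log base 2 are illustrative choices)."""
    subs = Counter()   # symmetric (p, q) substitution counts
    freq = Counter()   # individual phoneme counts
    for p, q in aligned_pairs:
        subs[(p, q)] += 1
        subs[(q, p)] += 1
        freq[p] += 1
        freq[q] += 1
    total = sum(freq.values())
    scores = {}
    for (p, q), n in subs.items():
        observed = n / (2 * len(aligned_pairs))
        expected = (freq[p] / total) * (freq[q] / total)
        scores[(p, q)] = math.log2((observed + 1e-9) / (expected + 1e-9))
    return scores
```

Phoneme pairs that substitute for each other more often than chance predicts receive positive scores, which can then feed the dynamic-programming string comparison directly.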

Nonlinear Vector Alignment Methodology for Mapping Domain-Specific Terminology into General Space (전문어의 범용 공간 매핑을 위한 비선형 벡터 정렬 방법론)

  • Kim, Junwoo;Yoon, Byungho;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.127-146
    • /
    • 2022
  • Recently, as word embedding has shown excellent performance in various deep-learning-based natural language processing tasks, research on the advancement and application of word, sentence, and document embedding has been actively conducted. Among these directions, cross-language transfer, which enables semantic exchange between different languages, is growing alongside the development of embedding models. Academic interest in vector alignment is growing with the expectation that it can be applied to various embedding-based analyses. In particular, vector alignment is expected to enable mapping between specialized domains and generalized domains: the vocabulary of specialized fields such as R&D, medicine, and law could be mapped into the space of a pre-trained language model learned from a huge volume of general-purpose documents, or vector alignment could provide a clue for mapping vocabulary between different specialized fields. However, the linear vector alignment that has mainly been studied assumes statistical linearity and thus tends to oversimplify the vector space. It essentially assumes that the two vector spaces are geometrically similar, which inevitably causes distortion in the alignment process. To overcome this limitation, we propose a deep-learning-based vector alignment methodology that effectively learns the nonlinearity of the data. The proposed methodology consists of sequential training of a skip-connected autoencoder and a regression model to align specialized word embeddings with the general embedding space. Finally, through inference with the two trained models, the specialized vocabulary can be aligned in the general space. To verify the performance of the proposed methodology, an experiment was performed on a total of 77,578 documents in the field of health care among national R&D tasks performed from 2011 to 2020. The results confirmed that the proposed methodology shows superior performance in terms of cosine similarity compared to existing linear vector alignment.
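The linear baseline that the proposed nonlinear method is compared against can be sketched as a least-squares map between embedding spaces (the authors' model replaces this single matrix with a skip-connected autoencoder plus a regression network; the toy data below are illustrative):

```python
import numpy as np

def linear_alignment(X_special, Y_general):
    """Least-squares linear map W minimizing ||XW - Y||_F: the kind of
    linear vector alignment used as a baseline in this line of work."""
    W, *_ = np.linalg.lstsq(X_special, Y_general, rcond=None)
    return W

# Toy example: align a 2-d "specialized" space to a rotated copy of itself.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
W = linear_alignment(X, X @ R)
aligned = X @ W   # maps specialized vectors into the general space
```

When the two spaces really are related by a linear transform, this recovers it exactly; the paper's point is that real embedding spaces are not, which motivates the nonlinear model.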

Removal of Heterogeneous Candidates Using Positional Accuracy Based on Levenshtein Distance on Isolated n-best Recognition (레벤스타인 거리 기반의 위치 정확도를 이용하여 다중 음성 인식 결과에서 관련성이 적은 후보 제거)

  • Yun, Young-Sun
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.8
    • /
    • pp.428-435
    • /
    • 2011
  • Many isolated word recognition systems generate irrelevant words as recognition results because they use only acoustic information or a small amount of language information. In this paper, I propose a word similarity measure that is used for selecting (or removing) less common words from candidates by applying the Levenshtein distance. Word similarity is obtained using positional accuracy, which reflects frequency information along with character alignment information. This paper also discusses various techniques for improving the selection of disparate words, including different loss values, phone accuracy based on confusion information, weighting candidates by ranking order, and partial comparisons. Through experiments, I found that the proposed methods are effective at removing heterogeneous words without loss of performance.
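The Levenshtein distance underlying the proposed positional accuracy is the classic dynamic-programming edit distance; a compact two-row implementation:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    turning string a into string b (classic DP, two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]
```

The full alignment traceback (not shown) is what supplies the per-position character correspondences that the paper's positional accuracy weights by frequency.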

Ontology Matching Method Based on Word Embedding and Structural Similarity

  • Hongzhou Duan;Yuxiang Sun;Yongju Lee
    • International journal of advanced smart convergence
    • /
    • v.12 no.3
    • /
    • pp.75-88
    • /
    • 2023
  • In a specific domain, experts have different understandings of domain knowledge or different purposes in constructing an ontology. This leads to multiple different ontologies in the same domain, a phenomenon called ontology heterogeneity. For research fields that require cross-ontology operations, such as knowledge fusion and knowledge reasoning, ontology heterogeneity creates real difficulties. In this paper, we propose a novel ontology matching model that combines word embedding with a concatenated continuous bag-of-words model. Our goal is to improve word vectors and distinguish semantic similarity from descriptive associations. Moreover, we make the most of textual and structural information from the ontology and external resources. We represent the ontology as a graph and use the SimRank algorithm to calculate structural similarity. Our approach employs a similarity queue to produce one-to-many matching results, which provide a wider range of insights for subsequent mining and analysis. This enhances and refines the methodology used in ontology matching.
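The structural-similarity step can be sketched with a naive SimRank implementation over the ontology graph; the graph encoding (node to list of in-neighbors) and the damping factor are illustrative choices:

```python
def simrank(graph, c=0.8, iterations=10):
    """Naive SimRank over a directed graph given as {node: [in-neighbors]}.
    Two nodes are similar to the extent that their in-neighbors are
    similar; c is the usual decay factor."""
    nodes = list(graph)
    sim = {(u, v): 1.0 if u == v else 0.0 for u in nodes for v in nodes}
    for _ in range(iterations):
        new = {}
        for u in nodes:
            for v in nodes:
                if u == v:
                    new[(u, v)] = 1.0
                    continue
                nu, nv = graph[u], graph[v]
                if not nu or not nv:
                    new[(u, v)] = 0.0
                    continue
                total = sum(sim[(x, y)] for x in nu for y in nv)
                new[(u, v)] = c * total / (len(nu) * len(nv))
        sim = new
    return sim

# Two concepts pointed to by the same parent become structurally similar.
s = simrank({"a": [], "b": ["a"], "c": ["a"]})
```

This O(n^2) per-pair formulation is only practical for small graphs; real systems use partial-sums or pruned variants.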

The Linear Constituent Order of the Noun Phrase: An Optimality Theoretic Account

  • Chung, Chin-Wan
    • English Language & Literature Teaching
    • /
    • v.9 no.1
    • /
    • pp.23-48
    • /
    • 2003
  • This paper provides an analysis of the linear constituent order of the NP in three different types of languages, based on 33 languages: the NP with prenominal modifiers, the NP with postnominal modifiers, and the NP with both prenominal and postnominal modifiers (the mixed NP). Languages have NPs that feature different linear orders of the NP constituents. We attribute these different linear constituent orders within the NP to linguistic distance and to the limits imposed by constituency and adjacency. We use various kinds of alignment constraints that properly reflect the linguistic distance between the noun and each constituent. Language universals on word order provide us with some general orders of various NP constituents. By adopting linguistic distance, the limits imposed by constituency and adjacency, and the alignment constraints, we can explain the complicated differences in NP constituent orders across the languages of the world.


Aligning Word Correspondence in Korean-Japanese Parallel Texts (한국어-일본어 정렬 기법 연구)

  • Kim, Tae-Wan
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2001.04a
    • /
    • pp.293-296
    • /
    • 2001
  • As parallel corpora have become easier to obtain than in the past, research on alignment techniques is needed as a tool for building bilingual dictionaries for use in language processing systems such as machine translation and multilingual information retrieval. In this paper, we propose an alignment technique using a Korean-Japanese parallel corpus.


Performance Improvement of Bilingual Lexicon Extraction via Pivot Language and Word Alignment Tool (중간언어와 단어정렬을 통한 이중언어 사전의 자동 추출에 대한 성능 개선)

  • Kwon, Hong-Seok;Seo, Hyeung-Won;Kim, Jae-Hoon
    • Annual Conference on Human and Language Technology
    • /
    • 2013.10a
    • /
    • pp.27-32
    • /
    • 2013
  • This paper proposes a method for automatically extracting bilingual lexicons from parallel corpora for under-resourced language pairs. The method uses a pivot language as an intermediary and employs Anymalign, a publicly available word alignment tool, to generate context vectors. As a result, no step translating context vectors through a seed dictionary is needed, and translation accuracy is improved for low-frequency words, a weakness of statistical methods. In addition, using bidirectional translation probabilities above a specific threshold as the context-vector element values greatly improved the top-5 translation accuracy. We conducted experiments extracting bilingual lexicons in both directions for two different language pairs, Korean-Spanish and Korean-French. Translation accuracy for high-frequency words improved by at least 3.41% and at most 67.91% over results reported in previous work, and translation accuracy for low-frequency words improved by at least 5.06% and at most 990%.
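The pivot-based ranking step can be sketched as cosine similarity between context vectors expressed over shared pivot-language dimensions. This is a simplified stand-in: the paper uses Anymalign-derived bidirectional translation probabilities as the vector elements, whereas the weights and words below are made up for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse context vectors {pivot_word: weight}."""
    dot = sum(w * v.get(k, 0.0) for k, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def best_translations(src_vec, tgt_vecs, top_n=5):
    """Rank target-language candidates by context-vector similarity,
    with all vectors expressed over the same pivot-language dimensions."""
    ranked = sorted(tgt_vecs.items(),
                    key=lambda kv: cosine(src_vec, kv[1]),
                    reverse=True)
    return ranked[:top_n]

# Hypothetical example: a Korean word's pivot-space context vector
# against two Spanish candidates.
src = {"water": 1.0, "drink": 0.5}
candidates = {"agua": {"water": 1.0, "drink": 0.4},
              "fuego": {"fire": 1.0}}
top = best_translations(src, candidates, top_n=1)
```

Because both sides are expressed in pivot-language dimensions, no seed dictionary is needed to make the vectors comparable, which is the point of the pivot approach.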
