• Title/Summary/Keyword: word similarity


A Study on Solving Word Problems through the Articulation of Analogical Mapping (유추 사상의 명료화를 통한 문장제 해결에 관한 연구)

  • Kim, Ji Eun; Shin, Jaehong
    • Communications of Mathematical Education, v.27 no.4, pp.429-448, 2013
  • The aim of this study was to examine the role that analogical mapping articulation activities play in the process of solving word problems. We analyzed the problem-solving strategies and processes that thirty-three participating 8th-grade students employed when solving problems through analogical mapping articulation activities, as well as the characteristics of their thinking processes from the perspective of similarity. The results indicate that analogical mapping articulation activities can help students solve similar word problems, although some students reached correct answers through pseudo-analytic thinking. To keep students from relying on pseudo-analytic thinking, it may be necessary to help them recognize superficial similarities and differences among problems and to construct structural similarity, so that they understand the solution principle associated with each problem situation.

Searching Similar Example-Sentences Using the Needleman-Wunsch Algorithm (Needleman-Wunsch 알고리즘을 이용한 유사예문 검색)

  • Kim, Dong-Joo; Kim, Han-Woo
    • Journal of the Korea Society of Computer and Information, v.11 no.4 s.42, pp.181-188, 2006
  • In this paper, we propose a search algorithm for similar example sentences in computer-aided translation. The search for similar examples, a core component of computer-aided translation, retrieves from a collection of examples those that are most similar to a given query in terms of structural and semantic analogy. The proposed algorithm is based on the Needleman-Wunsch algorithm, which is used in bioinformatics to measure the similarity between protein or nucleotide sequences. If the original Needleman-Wunsch algorithm were applied directly to the search for similar sentences, it would likely fail, since the similarity score is sensitive to a word's inflectional components. We therefore use the lemma in addition to the (typographical) surface form, and we also use the part of speech to capture structural analogy. In other words, this paper proposes a similarity metric that combines the surface, lemma, and part-of-speech information of a word. Finally, we present a search algorithm with the proposed metric, along with the word pairs that contribute to the similarity between a query and a retrieved example. Our algorithm shows good performance in the electricity and communications domain.

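As a rough illustration of the alignment at the heart of the approach summarized above, the sketch below runs Needleman-Wunsch over tokens represented as (surface, lemma, POS) triples. The scoring weights are invented for illustration; the paper combines these same three information sources, but the exact values here are an assumption.

```python
# Minimal Needleman-Wunsch alignment over (surface, lemma, POS) tokens.
# The weights below are illustrative, not taken from the paper.

GAP = -1.0  # penalty for aligning a token against a gap

def token_score(a, b):
    """Score two tokens, each a (surface, lemma, pos) triple."""
    surface_a, lemma_a, pos_a = a
    surface_b, lemma_b, pos_b = b
    score = 0.0
    if surface_a == surface_b:
        score += 0.5          # exact typographical match
    if lemma_a == lemma_b:
        score += 0.3          # same lemma despite different inflection
    if pos_a == pos_b:
        score += 0.2          # structural (part-of-speech) analogy
    return score if score > 0 else -0.5  # full mismatch penalty

def needleman_wunsch(query, example):
    """Global alignment score between two token sequences."""
    n, m = len(query), len(example)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + GAP
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + GAP
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + token_score(query[i - 1], example[j - 1]),
                dp[i - 1][j] + GAP,   # gap in example
                dp[i][j - 1] + GAP,   # gap in query
            )
    return dp[n][m]

# Toy usage: the sentences differ only in inflection, so lemma and POS
# matches still produce a high alignment score.
q = [("runs", "run", "VERB"), ("fast", "fast", "ADV")]
e = [("running", "run", "VERB"), ("fast", "fast", "ADV")]
print(needleman_wunsch(q, e))
```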

An Analysis of the Pseudo-analytical Thought and Analytical Thought that Students Do in the Process of Algebra Problem Solving (대수 문장제 해결 과정에서 나타나는 擬似(의사) 분석적 사고와 분석적 사고에 대한 분석 - 중학생 대상의 사례 연구 -)

  • Park, Hyun-Jeong; Lee, Chong-Hee
    • Journal of Educational Research in Mathematics, v.17 no.1, pp.67-90, 2007
  • The purpose of this study is to understand students' thinking processes in algebra problem solving, based on the work of Vinner (1997a, 1997b). Two middle school students were examined in this case study to see how they think when solving algebra word problems. To analyze their thinking processes from a similarity-based perspective, the following question was considered: what is the relationship between similarity and the characteristics of the thinking process in successful and unsuccessful problem solving? Analyzing success and failure in problem solving in terms of the characteristics of the thinking process and the construction of similarity yielded the following results. Successful problem solving can rest on either pseudo-analytical or analytical thought. In the former, rules are applied as closed formulas on the basis of structural similarity constructed without reference to the situations described in the problem text. In the latter, control and correction occur at every stage of the solution, and the knowledge needed for the solution is applied through open-ended formulas based on structural similarity constructed by recalling and modifying the relevant principles or concepts. In conclusion, students' understanding of the principles involved in a solution is very important in solving algebraic word problems.


Word Sense Disambiguation using Korean Word Space Model (한국어 단어 공간 모델을 이용한 단어 의미 중의성 해소)

  • Park, Yong-Min; Lee, Jae-Sung
    • The Journal of the Korea Contents Association, v.12 no.6, pp.41-47, 2012
  • Various Korean word sense disambiguation methods have been proposed that use small-scale sense-tagged corpora and dictionary definitions to calculate measures such as entropy, conditional probability, and mutual information. This paper proposes a method based on a Korean Word Space model, which builds word vectors from a large-scale sense-tagged corpus and disambiguates word senses by calculating the similarity between those vectors. An experiment with the Sejong morph-sense-tagged corpus showed 94% precision on 200 sentences (583 word types), far superior to the other known methods.
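
The core of a word-space method, comparing a context vector against per-sense vectors by cosine similarity, can be sketched as follows. The sense vectors here are hand-built toys; in the paper they would be derived from the large sense-tagged corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts of feature -> weight)."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

# Toy sense vectors for Korean '배' (ship / pear); real vectors would be
# built from co-occurrence counts in a large sense-tagged corpus.
sense_vectors = {
    "배/ship": {"바다": 3.0, "항구": 2.0, "타다": 1.5},
    "배/pear": {"과일": 3.0, "달다": 2.0, "먹다": 1.5},
}

def disambiguate(context_words):
    """Pick the sense whose vector is most similar to the context vector."""
    context = {w: 1.0 for w in context_words}
    return max(sense_vectors, key=lambda s: cosine(sense_vectors[s], context))

print(disambiguate(["바다", "타다"]))  # -> 배/ship
```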

Query Extension of Retrieve System Using Hangul Word Embedding and Apriori (한글 워드임베딩과 아프리오리를 이용한 검색 시스템의 질의어 확장)

  • Shin, Dong-Ha; Kim, Chang-Bok
    • Journal of Advanced Navigation Technology, v.20 no.6, pp.617-624, 2016
  • Noun extraction must be performed before Hangul word embedding; otherwise, unnecessary words are included in training and efficient embedding results cannot be obtained. In this paper, we propose a model that retrieves answers more effectively by expanding the query using Hangul word embedding, Apriori, and text mining. The word embedding and Apriori steps expand the query by extracting words associated with it in meaning and context. The Hangul text mining step extracts the most similar answer and responds to the user, using noun extraction, TF-IDF, and cosine similarity. The proposed model can improve answer accuracy by learning the answers of a specific domain and expanding the query with highly correlated terms. As future research, more correlated query terms need to be extracted by analyzing the user queries stored in the database.
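
The retrieval step summarized above can be sketched roughly as below: expand the query with associated terms, then rank stored answers by TF-IDF cosine similarity. The `associations` table is a hypothetical stand-in for the word-embedding/Apriori association step, and the answer texts are invented.

```python
# Sketch: query expansion followed by TF-IDF cosine-similarity ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

answers = [
    "GPS signals can be blocked by tall buildings",
    "inertial navigation drifts over time without correction",
    "radio beacons provide coarse position fixes",
]

# Hypothetical associations; the paper derives these from embeddings + Apriori.
associations = {"gps": ["signal", "position"], "drift": ["inertial", "correction"]}

def expand(query):
    """Append associated terms to the raw query terms."""
    terms = query.lower().split()
    for t in list(terms):
        terms += associations.get(t, [])
    return " ".join(terms)

vectorizer = TfidfVectorizer()
answer_matrix = vectorizer.fit_transform(answers)

query = expand("gps drift")
scores = cosine_similarity(vectorizer.transform([query]), answer_matrix)[0]
best = scores.argmax()
print(answers[best], scores[best])
```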

Graph-Based Word Sense Disambiguation Using Iterative Approach (반복적 기법을 사용한 그래프 기반 단어 모호성 해소)

  • Kang, Sangwoo
    • The Journal of Korean Institute of Next Generation Computing, v.13 no.2, pp.102-110, 2017
  • Current word sense disambiguation techniques employ various machine learning-based methods. Among the approaches proposed to address this problem is the knowledge-based approach, which determines the sense of an ambiguous word from knowledge base information without a training corpus. Within unsupervised techniques that use a knowledge base, graph-based and similarity-based methods have been the main research directions. The graph-based method has the advantage of constructing a semantic graph that delineates all paths between the different senses an ambiguous word may have. However, unnecessary semantic paths may be introduced, increasing the risk of errors. To solve this problem and construct a fine-grained graph, we propose a model that builds the graph iteratively while eliminating unnecessary nodes and edges, i.e., senses and semantic paths. A hybrid similarity estimation model is then applied to select a more accurate sense in the constructed semantic graph. Because the proposed model uses BabelNet, a multilingual lexical knowledge base, it is not limited to a specific language.
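
A minimal sketch of the iterative construct-and-prune idea follows, using networkx and toy relatedness weights in place of the BabelNet-derived semantic paths and the hybrid similarity model the paper actually uses.

```python
# Iterative graph refinement for WSD: build a graph of candidate senses,
# then repeatedly drop the weakest-connected sense until each ambiguous
# word keeps a single sense. Edge weights here are invented.
import networkx as nx

# Nodes are (word, sense_id); weights are toy semantic-relatedness scores.
G = nx.Graph()
G.add_edge(("bank", 1), ("money", 1), weight=0.9)   # bank-as-institution
G.add_edge(("bank", 2), ("money", 1), weight=0.1)   # bank-as-riverside
G.add_edge(("bank", 1), ("deposit", 1), weight=0.8)
G.add_edge(("bank", 2), ("deposit", 1), weight=0.2)

def prune(graph):
    g = graph.copy()
    while True:
        # Group candidate senses by the word they belong to.
        by_word = {}
        for node in g.nodes:
            by_word.setdefault(node[0], []).append(node)
        ambiguous = [senses for senses in by_word.values() if len(senses) > 1]
        if not ambiguous:
            return g
        # Remove the candidate sense with the lowest total edge weight.
        weakest = min(
            (s for senses in ambiguous for s in senses),
            key=lambda n: g.degree(n, weight="weight"),
        )
        g.remove_node(weakest)

resolved = prune(G)
print(sorted(resolved.nodes))  # ('bank', 2) has been pruned away
```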

Analyzing Errors in Bilingual Multi-word Lexicons Automatically Constructed through a Pivot Language

  • Seo, Hyeong-Won; Kim, Jae-Hoon
    • Journal of Advanced Marine Engineering and Technology, v.39 no.2, pp.172-178, 2015
  • Constructing a bilingual multi-word lexicon faces many difficulties, such as the absence of a commonly accepted gold-standard dataset; indeed, there is not even a universally agreed definition of what a multi-word unit is. With these problems in mind, this paper evaluates and analyzes the context vector approach, a method for constructing bilingual lexicons from parallel corpora, by comparing it with a conventional alignment method. The approach builds context vectors for both source and target single-word units from two parallel corpora. To adapt the approach to multi-word units, we first identify all multi-word candidates (noun phrases, in this work) and then concatenate each of them into a single-word unit, so that the context vector approach can be applied to multi-word units as well. In our experiments, the context vector approach showed stronger performance than the other approach. The contribution of this paper is the analysis of the various types of errors in the experimental results. In future work, we will study a similarity measure that covers not only a multi-word unit itself but also its constituents.
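
A compact sketch of the context vector approach as adapted above, with toy co-occurrence vectors and a toy seed dictionary; note how the multi-word candidate is concatenated into a single unit before the single-word machinery is applied. All data here is invented for illustration.

```python
# Context vector approach: build co-occurrence vectors in each language, map
# source dimensions into the target language via a seed dictionary, and rank
# target candidates by cosine similarity.
import math

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

seed = {"engine": "기관", "room": "실"}  # toy seed bilingual dictionary

# The multi-word candidate is concatenated into a single unit ("engine_room"),
# exactly so that the single-word machinery below applies unchanged.
src_vectors = {"engine_room": {"engine": 2.0, "room": 2.0, "ship": 1.0}}
tgt_vectors = {
    "기관실": {"기관": 2.0, "실": 2.0, "선박": 1.0},
    "과일":   {"달다": 2.0, "먹다": 1.0},
}

def translate_vector(vec):
    """Project a source context vector into target-language dimensions."""
    return {seed[w]: x for w, x in vec.items() if w in seed}

query = translate_vector(src_vectors["engine_room"])
best = max(tgt_vectors, key=lambda t: cosine(tgt_vectors[t], query))
print(best)  # -> 기관실
```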

A Text Similarity Measurement Method Based on Singular Value Decomposition and Semantic Relevance

  • Li, Xu; Yao, Chunlong; Fan, Fenglong; Yu, Xiaoqiang
    • Journal of Information Processing Systems, v.13 no.4, pp.863-875, 2017
  • Traditional text similarity measurement methods based on word frequency vectors ignore the semantic relationships between words, which, together with the high dimensionality and sparsity of document vectors, is an obstacle to text similarity calculation. To address these problems, an improved singular value decomposition is used to reduce the dimensionality of the text representation model and remove noise. The optimal number of singular values is analyzed, and the semantic relevance between words can be calculated in the constructed semantic space. An inverted index construction algorithm and similarity definitions between vectors are proposed to calculate the similarity between two documents at the semantic level. Experimental results on a benchmark corpus demonstrate that the proposed method improves the F-measure.
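
The pipeline summarized above, frequency vectors reduced by truncated SVD and compared by cosine similarity, can be sketched with scikit-learn. The number of singular values kept here is purely illustrative, whereas the paper analyzes how to choose it.

```python
# SVD-based document similarity: reduce a TF-IDF matrix with truncated SVD
# (a latent semantic space), then compare documents by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the cat sat on the mat",
    "a cat rested on a rug",
    "stock prices fell sharply today",
]

tfidf = TfidfVectorizer().fit_transform(docs)
latent = TruncatedSVD(n_components=2).fit_transform(tfidf)

sims = cosine_similarity(latent)
# The two cat sentences should score higher with each other than with the
# stock sentence, despite sharing few surface words.
print(sims[0][1], sims[0][2])
```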

The neighborhood size and frequency effect in Korean words (한국어 단어재인에서 나타나는 이웃효과)

  • Kwon, You-An; Cho, Hye-Suk; Nam, Ki-Chun
    • Proceedings of the KSPS conference, 2006.05a, pp.117-120, 2006
  • This paper examined two hypotheses. First, if the first syllable of a word plays an important role in visual word recognition, it may be the unit that defines a word's neighborhood. Second, if the first syllable is the unit of lexical access, neighborhood size and neighborhood frequency effects should appear in a lexical decision task and in a form-primed lexical decision task. We conducted two experiments. Experiment 1 showed that words with large neighborhoods produced an inhibitory effect in the lexical decision task (LDT). Experiment 2 showed an interaction between the neighborhood frequency effect and word-form similarity in the form-primed LDT. We conclude that the first syllable of Korean words may be the unit of the word neighborhood and play a central role in lexical access.

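For illustration only, the sketch below computes first-syllable neighborhoods over a toy lexicon under the paper's hypothesis that words sharing their first syllable (and length) are neighbors; the words and frequencies are invented, not the experimental stimuli.

```python
# Toy first-syllable neighborhoods: group words by (first syllable, length),
# then report each word's neighborhood size and higher-frequency neighbors.
from collections import defaultdict

lexicon = {"사람": 120, "사랑": 95, "사전": 30, "나라": 80, "나무": 60}

neighborhoods = defaultdict(list)
for word, freq in lexicon.items():
    neighborhoods[(word[0], len(word))].append((word, freq))

for word in lexicon:
    key = (word[0], len(word))
    neighbors = [w for w, _ in neighborhoods[key] if w != word]
    higher = [w for w, f in neighborhoods[key] if f > lexicon[word]]
    print(word, "size:", len(neighbors), "higher-frequency neighbors:", higher)
```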

A Method for Learning the Specialized Meaning of Terminology through Mixed Word Embedding (혼합 임베딩을 통한 전문 용어 의미 학습 방안)

  • Kim, Byung Tae; Kim, Nam Gyu
    • The Journal of Information Systems, v.30 no.2, pp.57-78, 2021
  • Purpose: In this study, we first attempt to produce embedding results that reflect the characteristics of both specialized and general documents. In addition, when disparate documents are combined as training material for natural language processing, we propose a method for quantitatively measuring the degree to which the characteristics of each individual domain are reflected. Approach: Korean Supreme Court precedent documents and the Korean Wikipedia were selected as the specialized and general documents, respectively. After extracting the most similar word pairs and their similarities for words observed only in the specialized documents, we observed how those values changed as the embedding was retrained together with the general documents. Findings: According to the measurement methods proposed in this study, the specificity of the specialized documents was diluted in the process of combining them with general documents, and the degree of dilution showed a positive correlation with the size of the general documents.
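
A rough sketch of the proposed measurement, assuming gensim's Word2Vec and toy stand-in corpora for the court precedents and Wikipedia: train on the specialized corpus alone, retrain on the mixture, and compare the similarity of a domain-specific word pair.

```python
# Sketch of the dilution measurement. The corpora are invented stand-ins;
# the paper uses Supreme Court precedents and Korean Wikipedia.
from gensim.models import Word2Vec

specialized = [["피고인", "항소", "기각"], ["피고인", "판결", "항소"]] * 50
general = [["사과", "과일", "먹다"], ["나무", "숲", "산"]] * 200

def pair_similarity(corpus, w1, w2):
    """Train a small embedding on `corpus` and return cosine sim of a word pair."""
    model = Word2Vec(sentences=corpus, vector_size=50, window=2,
                     min_count=1, seed=1, workers=1)
    return model.wv.similarity(w1, w2)

before = pair_similarity(specialized, "피고인", "항소")
after = pair_similarity(specialized + general, "피고인", "항소")
# Comparing `before` and `after` quantifies how much the general corpus
# dilutes the domain-specific association.
print(f"specialized only: {before:.3f}, mixed: {after:.3f}")
```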