• Title/Summary/Keyword: Syntactic Analysis

Terminology Recognition System based on Machine Learning for Scientific Document Analysis (과학 기술 문헌 분석을 위한 기계학습 기반 범용 전문용어 인식 시스템)

  • Choi, Yun-Soo;Song, Sa-Kwang;Chun, Hong-Woo;Jeong, Chang-Hoo;Choi, Sung-Pil
    • The KIPS Transactions:PartD, v.18D no.5, pp.329-338, 2011
  • Terminology recognition, a prerequisite for text mining, information extraction, information retrieval, the semantic web, and question answering, has been studied intensively but only within a limited range of domains, especially the biomedical domain. Because previous approaches rely on domain-specific resources and are therefore difficult to apply to general domains, we propose a domain-independent terminology recognition system based on machine learning that uses a dictionary, syntactic features, and Web search results. The proposed approach achieved an F-score of 80.8, a 6.5% improvement over C-value, a widely used related approach based on local domain frequencies. In a second experiment with various combinations of unithood features, the combination that included NGD (Normalized Google Distance) performed best, with an F-score of 81.8. Of the three machine learning methods applied (logistic regression, C4.5, and SVMs), the decision tree method C4.5 gave the best score.
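
The abstract does not spell out how the unithood features are computed, but NGD itself has a standard definition based on search hit counts (Cilibrasi & Vitányi). A minimal sketch, assuming hypothetical hit counts for a two-word candidate term:

```python
import math

def ngd(f_x: float, f_y: float, f_xy: float, n: float) -> float:
    """Normalized Google Distance computed from search hit counts.

    f_x, f_y : hit counts for each word of the candidate term
    f_xy     : hit count for the two words queried together
    n        : total number of indexed pages (normalization constant)
    """
    log_x, log_y, log_xy = math.log(f_x), math.log(f_y), math.log(f_xy)
    return (max(log_x, log_y) - log_xy) / (math.log(n) - min(log_x, log_y))

# Hypothetical hit counts: a small NGD means the two words co-occur tightly,
# which is evidence of unithood for the candidate term.
print(ngd(f_x=2.5e9, f_y=1.1e8, f_xy=6.0e7, n=2.5e10))
```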

A Business Service Identification Techniques Based on XL-BPMN Model (XL-BPMN 모델 기반 비즈니스 서비스 식별 기법)

  • Song, Chee-Yang;Cho, Eun-Sook
    • KIPS Transactions on Software and Data Engineering, v.5 no.3, pp.125-138, 2016
  • Service identification in service-oriented development has been performed based on workflows, goals, scenarios, use cases, components, features, and patterns. However, semantic approaches to identifying services from the business value view have not yet been worked out in detail. To improve the accuracy of business service identification, this paper proposes a method that identifies business services by analyzing both the syntax and the semantics of an XL-BPMN model. Business processes are identified from business scenarios and designed as an XL-BPMN business process model. Within this model, a unit business service is identified by binding closely related activities according to an integrated analysis of syntactic patterns and property-based semantic similarities between activities. Working on the XL-BPMN model at the upper business level, the method can identify reusable unit business services with high accuracy and modularity, and it can accelerate service-oriented development through reuse of the identified services.
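
The abstract does not give the similarity measure or the binding rule in detail, so the following is only a rough sketch of the general idea: score pairs of activities by label-token overlap plus shared properties and greedily bind activities whose mutual similarity clears a threshold. The names, weights, and threshold are illustrative assumptions, not the paper's actual algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str                                     # e.g. "Validate Order"
    properties: set = field(default_factory=set)  # e.g. {"order", "customer"}

def similarity(a: Activity, b: Activity) -> float:
    """Toy similarity: Jaccard overlap of label tokens plus shared properties."""
    ta, tb = set(a.name.lower().split()), set(b.name.lower().split())
    tokens = len(ta & tb) / len(ta | tb)
    props = len(a.properties & b.properties) / max(len(a.properties | b.properties), 1)
    return 0.5 * tokens + 0.5 * props

def bind_activities(activities, threshold=0.4):
    """Greedily group activities whose pairwise similarity exceeds the threshold;
    each resulting group is a candidate unit business service."""
    groups = []
    for act in activities:
        for group in groups:
            if all(similarity(act, other) >= threshold for other in group):
                group.append(act)
                break
        else:
            groups.append([act])
    return groups

acts = [
    Activity("Validate Order", {"order"}),
    Activity("Cancel Order", {"order"}),
    Activity("Send Invoice", {"invoice", "customer"}),
]
for g in bind_activities(acts):
    print([a.name for a in g])
```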

Detecting Inconsistent Code Identifiers (코드 비 일관적 식별자 검출 기법)

  • Lee, Sungnam;Kim, Suntae;Park, Sooyoung
    • KIPS Transactions on Software and Data Engineering, v.2 no.5, pp.319-328, 2013
  • Software maintainers rely heavily on identifiers when comprehending source code, so inconsistent identifiers throughout a code base increase the cost of software maintenance. Peer reviews can address this problem, but reviewing an entire code base is impractical when the volume of code is large. This paper introduces an approach for automatically detecting inconsistent identifiers in Java source code. The approach tokenizes and POS-tags all identifiers in the source code, classifies syntactically and semantically similar terms, and finally detects inconsistent identifiers by applying the proposed rules. We also developed a supporting tool, named CodeAmigo, and applied it to two popular Java-based open source projects to show the feasibility of the approach by computing precision.
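
As a rough illustration of the detection step (not CodeAmigo itself), the sketch below splits Java-style identifiers into terms and flags concepts that are expressed with more than one term across a code base. The hand-written synonym table stands in for the paper's POS-tagging and semantic-similarity classification.

```python
import re
from collections import defaultdict

# Hypothetical synonym groups; the paper derives such groupings from POS tagging
# and semantic similarity rather than from a hand-written table.
SYNONYMS = {
    "remove": "delete", "delete": "delete",
    "get": "get", "fetch": "get", "retrieve": "get",
}

def split_identifier(identifier: str):
    """Split a camelCase / snake_case identifier into lowercase terms."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", identifier)
    return [t.lower() for t in re.split(r"[\s_]+", spaced) if t]

def find_inconsistent_terms(identifiers):
    """Group terms by their canonical concept; a concept expressed with more
    than one surface term across the identifiers is reported as inconsistent."""
    used = defaultdict(set)
    for ident in identifiers:
        for term in split_identifier(ident):
            used[SYNONYMS.get(term, term)].add(term)
    return {concept: terms for concept, terms in used.items() if len(terms) > 1}

print(find_inconsistent_terms(["removeUser", "deleteAccount", "fetchUser", "getAccount"]))
# e.g. {'delete': {'remove', 'delete'}, 'get': {'fetch', 'get'}}
```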

Intensifiers in Korean, English and German: Focusing on their non-head-bound-use (한국어, 영어 그리고 독일어의 강화사: 비결속 용법을 중심으로)

  • 최규련
    • Language and Information, v.7 no.2, pp.31-58, 2003
  • The main goal of this paper is to describe and analyse intensifiers, especially non-head-bound intensifiers (NHBIs), which can be included in the discussion and analysis of these elements as focus particles. NHBIs such as Korean susulo and casin/cache, English x-self and German selbst are dealt with from a cross-linguistic perspective; a strict, exhaustive comparison of Korean, English and German is not intended. This paper is mainly concerned with the semantic domain where the contributions of the expressions in question overlap, which offers a common basis for discussing Korean, a non-European language, alongside English and German, two European languages. They share the semantic domain ‘intensification’ with respect to the relevant subject NP, and they introduce an ordering that distinguishes center and periphery. In contrast to head-bound intensifiers (HBIs), however, NHBIs add self-involvement (directness of involvement) of the subject NP to the meaning of the relevant sentence. I adopt the proposals of König (1991), Primus (1992) and Siemund (2000) in treating intensifiers as focus particles. However, I reject König's (1991) claim that NHBIs simply take scope over a whole clause, Primus's (1992) claim that NHBIs focus VPs rather than NPs, and Siemund's (2000) claim that NHBIs can be further divided into two groups, namely NHBIs with exclusive readings and NHBIs with inclusive readings. Evidence for my position is presented mainly in the course of describing and analysing some syntactic properties and the meaning and use of NHBIs. I conclude that both the common meaning of intensifiers as focus particles and the common meaning of NHBIs in the three languages can be represented by a simple logical formalism.

Comparison of prosodic characteristics by question type in left- and right-hemisphere-injured stroke patients (좌반구 손상과 우반구 손상 뇌졸중 환자의 의문문 유형에 따른 운율 특성 비교)

  • Yu, Youngmi;Seong, Cheoljae
    • Phonetics and Speech Sciences, v.13 no.3, pp.1-13, 2021
  • This study examined the characteristics of linguistic prosody in terms of cerebral lateralization in three groups: 9 healthy speakers, 7 speakers with left hemisphere damage (LHD), and 7 speakers with right hemisphere damage (RHD) following stroke. Specifically, prosodic characteristics related to speech rate, duration, pitch, and intensity were examined in three types of interrogative sentences (wh-questions, yes-no questions, and alternative questions), together with an auditory perceptual evaluation. The statistically significant key variables indicated deficits in the production of linguistic prosody by the speakers with LHD, and these deficits were more pronounced for wh-questions than for yes-no and alternative questions. This trend was particularly noticeable in variables related to pitch and speech rate. The results suggest that when Korean speakers process linguistic prosody, such as the prosody carrying lexico-semantic and syntactic information in interrogative sentences, the left hemisphere appears to be superior to the right hemisphere.

An Analysis of Linguistic Features in Science Textbooks across Grade Levels: Focus on Text Cohesion (과학교과서의 학년 간 언어적 특성 분석 -텍스트 정합성을 중심으로-)

  • Ryu, Jisu;Jeon, Moongee
    • Journal of The Korean Association For Science Education, v.41 no.2, pp.71-82, 2021
  • Learning efficiency can be maximized by carefully matching text features to expected reader features (i.e., linguistic and cognitive abilities and background knowledge). The present study explores whether this principle is reflected in the development of science textbooks. We examined science textbook texts on 20 measures provided by Auto-Kohesion, a Korean language analysis tool. In addition to the surface-level features commonly used in previous text analysis studies (basic counts, word-related measures, and syntactic complexity measures), the study included cohesion-related features (noun overlap ratios, connectives, and pronouns). The main findings show that the surface measures (e.g., word and sentence length, word frequency) generally increased in complexity with grade level, whereas most of the other measures, particularly the cohesion-related ones, did not vary systematically across grade levels. These results suggest that students in lower grades are likely to experience learning difficulties and lowered motivation because the texts are too challenging, while the textbooks are also unlikely to help students in higher grades develop the ability to process the difficult texts required for higher education. The study suggests that various text-related features, including cohesion-related measures, need to be carefully considered in textbook development.
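
The cohesion measures themselves are computed by Auto-Kohesion; purely as an illustration of what a noun-overlap-style measure captures, here is a toy English approximation that checks whether adjacent sentences share content tokens. A real analysis of Korean textbook text would require morphological analysis and POS tagging rather than this simple stopword filter.

```python
def adjacent_overlap(sentences, stopwords=frozenset({"the", "a", "an", "of", "and", "is", "are"})):
    """Rough stand-in for a noun-overlap cohesion measure: the proportion of
    adjacent sentence pairs that share at least one content token."""
    def content(sentence):
        return {w.lower().strip(".,") for w in sentence.split()} - stopwords
    pairs = list(zip(sentences, sentences[1:]))
    if not pairs:
        return 0.0
    shared = sum(1 for s1, s2 in pairs if content(s1) & content(s2))
    return shared / len(pairs)

text = [
    "The cell is the basic unit of life.",
    "Each cell is surrounded by a membrane.",
    "The membrane controls what enters the cell.",
]
print(adjacent_overlap(text))  # both adjacent pairs share a content word -> 1.0
```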

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.105-122, 2019
  • Dimensionality reduction is one way to handle big data in text mining. Data density has a significant influence on the performance of sentence classification, and higher-dimensional data requires more computation, which leads to high computational cost and overfitting; a dimension reduction step is therefore necessary to improve model performance. Diverse methods have been proposed, ranging from merely reducing noise such as misspellings and informal text to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect classifier performance in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods rely on algorithms such as feature extraction and feature selection, as well as word embeddings, which learn low-dimensional vector representations of words that capture semantic and syntactic information. To improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, words similar to them are assumed to have little impact on sentence classification either. This study proposes two ways to achieve more accurate classification, both of which perform selective word elimination under specific rules and construct word embeddings based on Word2Vec. To select low-importance words from the text, we use information gain to measure importance and cosine similarity to find similar words. First, we eliminate words with comparatively low information gain from the raw text and build the word embedding. Second, we additionally remove words that are similar to the low-information-gain words and build the word embedding. Finally, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes was over 70% were classified as helpful reviews; since Yelp shows only the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared them with Word2Vec and GloVe embeddings that use all the words, and showed that one of the proposed methods outperforms the all-word embeddings: removing unimportant words improves performance, although removing too many words lowers it. Future research should consider diverse preprocessing strategies and an in-depth analysis of word co-occurrence for measuring similarity between words. We applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, and the possible combinations of embedding and elimination methods could be explored.
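
A minimal sketch of the two elimination rules described above: drop words with low information gain, and optionally also drop words whose embedding is highly cosine-similar to a dropped word. The thresholds, the bag-of-tokens document representation, and the `vectors` lookup (standing in for a trained Word2Vec model) are assumptions for illustration only.

```python
import numpy as np
from collections import Counter

def information_gain(docs, labels, word):
    """Binary-presence information gain of `word` with respect to the labels."""
    def entropy(ys):
        if not ys:
            return 0.0
        probs = np.array(list(Counter(ys).values()), dtype=float) / len(ys)
        return float(-(probs * np.log2(probs)).sum())
    with_w = [y for d, y in zip(docs, labels) if word in d]
    without = [y for d, y in zip(docs, labels) if word not in d]
    p = len(with_w) / len(docs)
    return entropy(labels) - p * entropy(with_w) - (1 - p) * entropy(without)

def words_to_drop(docs, labels, vectors, ig_threshold=0.01, sim_threshold=0.8):
    """Words with low information gain, plus words whose embedding is highly
    similar to a low-information-gain word (the second proposed variant)."""
    vocab = {w for d in docs for w in d}
    low_ig = {w for w in vocab if information_gain(docs, labels, w) < ig_threshold}
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    similar = {w for w in vocab - low_ig
               if any(w in vectors and w2 in vectors and
                      cos(vectors[w], vectors[w2]) >= sim_threshold for w2 in low_ig)}
    return low_ig | similar

docs = [{"great", "movie"}, {"boring", "movie"}, {"great", "plot"}, {"dull", "plot"}]
labels = [1, 0, 1, 0]
rng = np.random.default_rng(0)
vectors = {w: rng.normal(size=8) for d in docs for w in d}  # stand-in for Word2Vec vectors
print(words_to_drop(docs, labels, vectors))
```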

On the Sequences of Dialogue Acts and the Dialogue Flows-w.r.t. the appointment scheduling dialogues (대화행위의 연쇄관계와 대화흐름에 대하여 -[일정협의 대화] 중심으로)

  • 박혜은;이민행
    • Korean Journal of Cognitive Science, v.10 no.2, pp.27-34, 1999
  • The main purpose of this paper is to propose a general dialogue flow for appointment scheduling dialogues in German using the concept of dialogue acts. A basic assumption of this research is that dialogue acts contribute to the improvement of a translation system: they can be very useful for resolving, with contextual knowledge, problems that the syntactic and semantic modules cannot. The classification of the dialogue acts was carried out as part of the VERBMOBIL project and was based on real dialogues transcribed by experts. The real dialogues were analyzed in terms of dialogue acts, and we empirically analyzed the sequences of dialogue acts not only across a series of dialogue turns but also within a single turn. We analyzed the within-turn sequences additionally because the dialogue data used in this research differed somewhat from the data in other existing studies. By examining the sequences of dialogue acts, we propose a dialogue flowchart for appointment scheduling dialogues. Based on the statistical analysis of the sequences of the most frequent dialogue acts, the flowcharts appear to represent appointment scheduling dialogues in general. Further research is required on the classification of dialogue acts, which was the basis for the analysis of the dialogues. In order to extract the most general model, we did not subcategorize the dialogue acts and used a limited number of dialogue act types; however, the generally defined dialogue acts need to be defined more concretely, and new dialogue acts for specific situations should be added.
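
As a small illustration of the kind of sequence statistics behind such a flowchart, the sketch below counts adjacent dialogue-act pairs over annotated dialogues. The act labels are illustrative placeholders for appointment-scheduling acts, not necessarily the exact VERBMOBIL tag set used in the paper.

```python
from collections import Counter

def transition_counts(dialogues):
    """Count ordered pairs of adjacent dialogue acts across annotated dialogues;
    the most frequent transitions sketch the overall dialogue flow."""
    counts = Counter()
    for acts in dialogues:
        counts.update(zip(acts, acts[1:]))
    return counts

# Illustrative dialogue-act annotations for two appointment-scheduling dialogues.
dialogues = [
    ["GREET", "SUGGEST", "REJECT", "SUGGEST", "ACCEPT", "BYE"],
    ["GREET", "SUGGEST", "ACCEPT", "BYE"],
]
for (a, b), n in transition_counts(dialogues).most_common(3):
    print(f"{a} -> {b}: {n}")
```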

Component Analysis for Constructing an Emotion Ontology (감정 온톨로지의 구축을 위한 구성요소 분석)

  • Yoon, Ae-Sun;Kwon, Hyuk-Chul
    • Korean Journal of Cognitive Science, v.21 no.1, pp.157-175, 2010
  • In human communication, understanding a dialogue participant's emotion is as important as decoding the explicit message. Non-verbal elements are well known to be more suitable than verbal elements for conveying a speaker's emotions; written texts, however, contain a variety of linguistic units that express emotions. This study analyzes the components needed to construct an emotion ontology, which has numerous applications in human language technology. Most previous work on text-based emotion processing focused on classifying emotions, constructing emotion dictionaries, and retrieving those lexica in text through keyword spotting and/or syntactic parsing; the emotions retrieved or computed in this way were not very accurate. This study therefore proposes a more sophisticated component analysis and introduces linguistic factors. (1) Five linguistic types of emotion expression are differentiated in terms of target (verbal/non-verbal) and method (expressive/descriptive/iconic); the correlations among them, as well as their correlation with the non-verbal expressive type, are also determined, which should make the ontology more adaptable to multi-modal environments. (2) As emotion-related components, this study proposes 24 emotion types, a 5-point intensity scale (-2 to +2), and a 3-way polarity (positive/negative/neutral), which can describe a variety of emotions in more detail and in a standardized way. (3) We introduce components related to verbal expression, such as 'experiencer', 'description target', 'description method' and 'linguistic features', which allow verbal expressions of emotion to be classified and tagged appropriately. (4) By adopting the linguistic tag sets proposed by ISO and TEI and providing a mapping table between our classification of emotions and Plutchik's, the ontology can easily be employed for multilingual processing.
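
To make the proposed components concrete, here is a minimal data-structure sketch of one annotated emotion expression using the components listed in the abstract. The three emotion types shown are placeholders, since the paper's full inventory of 24 types is not reproduced here, and the field names are only a reading of the abstract, not the ontology's actual schema.

```python
from dataclasses import dataclass

POLARITY = {"positive", "negative", "neutral"}
INTENSITY = range(-2, 3)                      # the 5-point intensity scale (-2 .. +2)
EMOTION_TYPES = {"joy", "sadness", "anger"}   # placeholder subset of the 24 proposed types

@dataclass
class EmotionAnnotation:
    """One annotated verbal emotion expression, following the components
    listed in the abstract (experiencer, description target/method, etc.)."""
    emotion_type: str
    intensity: int
    polarity: str
    experiencer: str
    description_target: str       # verbal / non-verbal
    description_method: str       # expressive / descriptive / iconic
    linguistic_features: list

    def __post_init__(self):
        assert self.emotion_type in EMOTION_TYPES
        assert self.intensity in INTENSITY
        assert self.polarity in POLARITY

ann = EmotionAnnotation("joy", +2, "positive", "speaker",
                        "verbal", "expressive", ["exclamative"])
print(ann.emotion_type, ann.intensity, ann.polarity)
```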

Detecting Spelling Errors by Comparison of Words within a Document (문서내 단어간 비교를 통한 철자오류 검출)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information, v.16 no.12, pp.83-92, 2011
  • Unlike formal publications, documents prepared with word processors frequently contain typographical errors caused by the author's mistyping. The most common orthographical errors in such documents are spelling errors that result from hitting keys adjacent to the intended keys on the keyboard. Typical spelling checkers detect and correct these errors with a morphological analyzer: the analyzer checks the well-formedness of input words, and all words it rejects are regarded as misspelled. However, if the analyzer accepts a mistyped word, the word is treated as correctly spelled. In this paper, I propose a simple method capable of detecting and correcting errors that previous methods cannot detect. The proposed method is based on the observation that typographical errors are generally not repeated and therefore tend to have very low frequency. If the words generated by deleting, exchanging, or transposing each phoneme of a low-frequency word appear in the list of high-frequency words, those high-frequency words are taken to be the correctly spelled forms. Some heuristic rules are also presented to reduce the number of candidates. The proposed method can detect some semantic errors that are not syntactic errors, and it is also useful for scoring candidates.
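
A minimal sketch of the idea, using letters in place of the Korean phonemes (jamo) the paper operates on: generate deletion, exchange, and transposition variants of each low-frequency word and report those whose variants match a high-frequency word in the same document. The frequency thresholds are illustrative assumptions.

```python
from collections import Counter

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def variants(word):
    """Deletion, substitution ('exchange'), and adjacent transposition of each
    character; the paper applies the same operations at the phoneme level."""
    out = set()
    for i in range(len(word)):
        out.add(word[:i] + word[i + 1:])                               # deletion
        out.update(word[:i] + c + word[i + 1:] for c in ALPHABET)      # exchange
        if i + 1 < len(word):
            out.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])   # transposition
    out.discard(word)
    return out

def suspected_typos(tokens, low_freq=1, high_freq=3):
    """Low-frequency words whose edit variants hit a high-frequency word in the
    same document, reported together with their likely corrections."""
    freq = Counter(tokens)
    frequent = {w for w, n in freq.items() if n >= high_freq}
    report = {}
    for word, n in freq.items():
        if n <= low_freq:
            hits = variants(word) & frequent
            if hits:
                report[word] = hits
    return report

doc = "analysis analysis analysis anaylsis syntax syntax syntax".split()
print(suspected_typos(doc))  # {'anaylsis': {'analysis'}}
```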