• Title/Abstract/Keyword: Syntactic-Semantic Analysis

Development of an Organism-specific Protein Interaction Database with Supplementary Data from the Web Sources (다양한 웹 데이터를 이용한 특정 유기체의 단백질 상호작용 데이터베이스 개발)

  • Hwang, Doo-Sung
    • The KIPS Transactions:PartD / v.9D no.6 / pp.1091-1096 / 2002
  • This paper presents the development of a protein interaction database. The developed system is characterized as follows. First, it maintains not only interaction data collected by experiment but also the genomic information associated with the protein data. Second, it can extract details on interacting proteins through the developed wrappers. Third, it uses a wrapper-based design to extract biologically meaningful data from various web sources and integrate them into a relational database. The system adopts a layered, modular architecture built on a wrapper-mediator approach in order to resolve the syntactic and semantic heterogeneity among multiple data sources. Currently, the system has wrapped the relevant data for about 40% of roughly 11,500 proteins on average from the accessible sources. The wrapper-mediator approach makes the protein interaction data comprehensive and useful by supporting data interoperability and integration. The database under development should prove useful for knowledge mining and further analysis in proteomics studies.
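
The wrapper-mediator layering described above can be sketched as a small interface plus a merging mediator. A minimal sketch, assuming hypothetical class and field names (ProteinRecord, SourceWrapper, StaticWrapper) and a toy merge rule; it is not the paper's actual implementation:

```python
# Sketch of a wrapper-mediator layer; all names and the merge rule
# are illustrative assumptions, not the system described in the paper.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class ProteinRecord:
    """Common relational schema the mediator maps every source onto."""
    accession: str
    name: str
    organism: str
    interactions: list[str]

class SourceWrapper(Protocol):
    """One wrapper per web source, hiding that source's syntax and format."""
    def fetch(self, accession: str) -> ProteinRecord: ...

@dataclass
class StaticWrapper:
    """Toy wrapper returning canned data, standing in for a real web wrapper."""
    data: dict

    def fetch(self, accession: str) -> ProteinRecord:
        return self.data[accession]

class Mediator:
    """Resolves semantic heterogeneity by merging the wrappers' outputs."""
    def __init__(self, wrappers: list[SourceWrapper]):
        self.wrappers = wrappers

    def integrate(self, accession: str) -> ProteinRecord:
        records = [w.fetch(accession) for w in self.wrappers]
        merged = records[0]
        for r in records[1:]:
            # Union the interaction partners; first source wins elsewhere.
            merged.interactions = sorted(set(merged.interactions) | set(r.interactions))
        return merged

a = StaticWrapper({"P1": ProteinRecord("P1", "kinase A", "E. coli", ["P2"])})
b = StaticWrapper({"P1": ProteinRecord("P1", "kinase A", "E. coli", ["P3"])})
print(Mediator([a, b]).integrate("P1").interactions)  # ['P2', 'P3']
```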

Terminology Recognition System based on Machine Learning for Scientific Document Analysis (과학 기술 문헌 분석을 위한 기계학습 기반 범용 전문용어 인식 시스템)

  • Choi, Yun-Soo;Song, Sa-Kwang;Chun, Hong-Woo;Jeong, Chang-Hoo;Choi, Sung-Pil
    • The KIPS Transactions:PartD / v.18D no.5 / pp.329-338 / 2011
  • Terminology recognition, a prerequisite for text mining, information extraction, information retrieval, the semantic web, and question answering, has been studied intensively but mostly within limited domains, especially the bio-medical domain. Because previous approaches relied on domain-specific resources and therefore proved hard to apply to general domains, we propose a domain-independent terminology recognition system based on machine learning that uses a dictionary, syntactic features, and Web search results. We achieved an F-score of 80.8, a 6.5% improvement over the related C-value approach, which has been widely used and is based on local domain frequencies. In a second experiment with various combinations of unithood features, the method combined with NGD (Normalized Google Distance) showed the best performance, an F-score of 81.8. We applied three machine learning methods, logistic regression, C4.5, and SVMs, and obtained the best score from the decision tree method, C4.5.
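
NGD, the unithood feature that performed best above, has a standard closed form (Cilibrasi & Vitanyi): NGD(x, y) = (max(log f(x), log f(y)) - log f(x, y)) / (log N - min(log f(x), log f(y))), where f(.) counts search hits and N is the size of the index. A minimal sketch with illustrative hit counts; the paper's exact query setup is not specified here:

```python
import math

def ngd(fx: float, fy: float, fxy: float, n: float) -> float:
    """Normalized Google Distance.

    fx, fy : hit counts for terms x and y alone
    fxy    : hit count for the joint query "x AND y"
    n      : total number of indexed pages
    """
    lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
    return (max(lx, ly) - lxy) / (math.log(n) - min(lx, ly))

# Terms that co-occur often score near 0, unrelated terms score higher,
# which is why NGD serves as a unithood feature for candidate terms.
print(ngd(fx=2.1e7, fy=6.5e6, fxy=3.2e6, n=5e10))  # illustrative counts
```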

Detecting Inconsistent Code Identifiers (코드 비 일관적 식별자 검출 기법)

  • Lee, Sungnam;Kim, Suntae;Park, Sooyoung
    • KIPS Transactions on Software and Data Engineering / v.2 no.5 / pp.319-328 / 2013
  • Software maintainers comprehend source code largely through its identifiers, so inconsistent identifiers scattered throughout the code increase the cost of software maintenance. Peer reviews can address this problem, but reviewing an entire code base is impractical when the volume of code is huge. This paper introduces an approach to automatically detecting inconsistent identifiers in Java source code. The approach tokenizes and POS-tags all identifiers in the source code, groups syntactically and semantically similar terms, and then detects inconsistent identifiers by applying the proposed rules. In addition, we developed tool support, named CodeAmigo, for the approach, and applied it to two popular Java-based open source projects to show the feasibility of the approach by computing its precision.
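
The pipeline (tokenize identifiers, POS-tag them, group similar terms, apply rules) can be illustrated with a toy sketch; the synonym classes and the single rule below are hypothetical stand-ins for the paper's similarity grouping and CodeAmigo's actual rule set, and the POS-tagging step is omitted:

```python
import re
from itertools import combinations

def tokenize(identifier: str) -> list[str]:
    """Split a Java identifier on camelCase boundaries and underscores."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", identifier)
    return [t.lower() for t in re.split(r"[\s_]+", spaced) if t]

# Hypothetical synonym classes standing in for the semantic grouping
# (which could come from a resource such as WordNet).
SYNONYMS = [{"remove", "delete", "erase"}, {"start", "begin"}]

def inconsistent_pairs(identifiers: list[str]):
    """Flag identifier pairs naming the same concept with different terms."""
    for a, b in combinations(identifiers, 2):
        ta, tb = set(tokenize(a)), set(tokenize(b))
        for group in SYNONYMS:
            # Both touch the same synonym class but with different words.
            if (ta & group) and (tb & group) and not (ta & tb & group):
                yield a, b

print(list(inconsistent_pairs(["removeItem", "deleteEntry", "startTask"])))
# [('removeItem', 'deleteEntry')]
```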

Comparison of prosodic characteristics by question type in left- and right-hemisphere-injured stroke patients (좌반구 손상과 우반구 손상 뇌졸중 환자의 의문문 유형에 따른 운율 특성 비교)

  • Yu, Youngmi;Seong, Cheoljae
    • Phonetics and Speech Sciences / v.13 no.3 / pp.1-13 / 2021
  • This study examined the characteristics of linguistic prosody in terms of cerebral lateralization in three groups: 9 healthy speakers and 14 speakers with a history of stroke (7 with left hemisphere damage (LHD) and 7 with right hemisphere damage (RHD)). Specifically, prosodic characteristics related to speech rate, duration, pitch, and intensity were examined in three types of interrogative sentences (wh-questions, yes-no questions, and alternative questions), together with an auditory-perceptual evaluation. The statistically significant key variables revealed deficits in the production of linguistic prosody by the speakers with LHD, and these deficits were more pronounced for wh-questions than for yes-no and alternative questions, particularly in variables related to pitch and speech rate. These results suggest that when Korean speakers process linguistic prosody carrying lexico-semantic and syntactic information in interrogative sentences, the left hemisphere is dominant over the right.
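
As a rough illustration of the acoustic measures compared above, the sketch below computes pitch and intensity statistics for one utterance; the tooling (librosa), pitch range, and file name are assumptions of this sketch, since the abstract does not specify the study's analysis pipeline:

```python
# Pitch and intensity statistics for a single utterance (illustrative).
import librosa
import numpy as np

y, sr = librosa.load("utterance.wav", sr=None)  # hypothetical recording

# Fundamental frequency (pitch) track via probabilistic YIN;
# unvoiced frames come back as NaN, hence the nan-aware statistics.
f0, _, _ = librosa.pyin(y, fmin=75, fmax=400, sr=sr)
print("mean F0 (Hz):", np.nanmean(f0), " F0 SD (Hz):", np.nanstd(f0))

# Intensity proxy: frame-wise root-mean-square energy, in dB.
rms = librosa.feature.rms(y=y)[0]
print("mean intensity (dB):", np.mean(librosa.amplitude_to_db(rms)))
```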

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one way to handle big data in text mining. High-dimensional data demands heavy computation and can cause overfitting, so a dimension reduction step is necessary to improve model performance; the density of the data also has a significant influence on the performance of sentence classification. Proposed methods range from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. Moreover, how text features are represented and selected affects classifier performance in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition, word embeddings, low-dimensional vector space representations of words that capture semantic and syntactic information, are also used; to improve performance, recent studies have suggested modifying the word dictionary according to the positive and negative scores of pre-defined words.
The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, words similar to them should likewise have little impact on sentence classification. This study proposes two ways to achieve more accurate classification, both of which eliminate words selectively under specific rules and construct word embeddings based on Word2Vec. To select unimportant words from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. In the first method, we eliminate words with comparatively low information gain values from the raw text and then build the word embedding. In the second, we additionally eliminate words similar to the low-information-gain words before building the embedding. The filtered text and word embeddings are then fed to two deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM.
This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each with the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio above 70%, were classified as helpful reviews; since Yelp reports only the number of helpful votes, we randomly sampled 100,000 reviews with more than five helpful votes from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text, was applied to each dataset. To evaluate the proposed methods, we compared them with Word2Vec and GloVe embeddings built from all the words, and showed that one of the proposed methods outperforms the embeddings that use all the words: removing unimportant words improves performance, although removing too many words lowers it.
For future research, diverse preprocessing strategies and an in-depth analysis of word co-occurrence should be considered when measuring similarity between words. We also applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, and the possible combinations of embedding and elimination methods remain to be explored.
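
A minimal sketch of the proposed filtering, assuming information gain is approximated with scikit-learn's mutual information and gensim's Word2Vec supplies the cosine-similar neighbours; the toy corpus, threshold, and topn are illustrative, not the paper's settings:

```python
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

docs = ["great book loved it", "terrible plot wasted money",
        "great story", "terrible pacing"]            # toy corpus
labels = [1, 0, 1, 0]                                # helpful / not helpful

# Step 1: score each word's importance (information gain analogue).
vec = CountVectorizer(binary=True)
X = vec.fit_transform(docs)
gain = mutual_info_classif(X, labels, discrete_features=True)
low_gain = {w for w, g in zip(vec.get_feature_names_out(), gain) if g < 0.1}

# Step 2: also remove words whose embeddings are similar to a low-gain word.
w2v = Word2Vec([d.split() for d in docs], vector_size=32, min_count=1, seed=0)
to_remove = set(low_gain)
for w in low_gain:
    to_remove |= {s for s, sim in w2v.wv.most_similar(w, topn=3) if sim > 0.5}

filtered = [" ".join(t for t in d.split() if t not in to_remove) for d in docs]
print(filtered)  # feed these (and a rebuilt embedding) to the CNN / BiLSTM
```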

On the Sequences of Dialogue Acts and the Dialogue Flows-w.r.t. the appointment scheduling dialogues (대화행위의 연쇄관계와 대화흐름에 대하여 -[일정협의 대화] 중심으로)

  • 박혜은;이민행
    • Korean Journal of Cognitive Science / v.10 no.2 / pp.27-34 / 1999
  • The main purpose of this paper is to propose a general dialogue flow for appointment scheduling dialogues in German using the concept of dialogue acts. A basic assumption of this research is that dialogue acts contribute to the improvement of a translation system: they can be very useful for solving, with contextual knowledge, problems that the syntactic and semantic modules cannot resolve. The classification of dialogue acts was conducted as part of the VERBMOBIL project and was based on real dialogues transcribed by experts. The real dialogues were analyzed in terms of dialogue acts. We empirically analyzed the sequences of dialogue acts not only across a series of dialogue turns but also within a single turn; we analyzed within-turn sequences additionally because the dialogue data used in this research differ somewhat from those in other existing studies. By examining the sequences of dialogue acts, we derived a dialogue flowchart for the appointment scheduling dialogues. Based on the statistical analysis of the sequences of the most frequent dialogue acts, the flowcharts appear to represent appointment scheduling dialogues in general. Further research is required on the classification of dialogue acts, which was the basis for the analysis of the dialogues. In order to extract the most generalized model, we did not subcategorize the dialogue acts and used a limited number of them; however, the generally defined dialogue acts need to be defined more concretely, and new dialogue acts for specific situations should be added.
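
The sequence analysis described above reduces to counting dialogue-act bigrams and keeping the most frequent transitions as edges of the flowchart; a minimal sketch, with act labels that are illustrative rather than the paper's actual inventory:

```python
from collections import Counter

# Two toy appointment-scheduling dialogues, each a sequence of act labels
# (loosely modeled on VERBMOBIL-style tags; not the paper's annotation).
dialogues = [
    ["GREET", "INIT", "SUGGEST", "REJECT", "SUGGEST", "ACCEPT", "BYE"],
    ["GREET", "INIT", "SUGGEST", "ACCEPT", "CONFIRM", "BYE"],
]

# Count act-to-act transitions across all dialogues.
bigrams = Counter((a, b) for acts in dialogues for a, b in zip(acts, acts[1:]))

# The highest-frequency transitions approximate the general dialogue flow.
for (a, b), n in bigrams.most_common(5):
    print(f"{a} -> {b}: {n}")
```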

Detecting Spelling Errors by Comparison of Words within a Document (문서내 단어간 비교를 통한 철자오류 검출)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.83-92 / 2011
  • Typographical errors caused by the author's mistyping occur frequently in documents prepared with word processors, in contrast to conventional publications. In such documents, the most common orthographical errors are spelling errors that result from hitting keys near the intended ones on the keyboard. Typical spelling checkers detect and correct these errors with a morphological analyzer: the analyzer checks the well-formedness of input words, and all words it rejects are regarded as misspelled. However, if the morphological analyzer accepts even a mistyped word, it is treated as correctly spelled. In this paper, I propose a simple method capable of detecting and correcting errors that previous methods cannot detect. The proposed method is based on the observation that typographical errors are generally not repeated and therefore tend to have very low frequency. If the words generated by applying deletion, exchange, and transposition operations to each phoneme of a low-frequency word appear in the list of high-frequency words, some of them are considered the correctly spelled forms. Some heuristic rules are also presented to reduce the number of candidates. The proposed method can detect some semantic errors that syntactic checking cannot, and it is also useful for scoring candidates.
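
The candidate-generation step can be sketched as follows, operating on characters rather than Korean phonemes (jamo) for simplicity; the alphabet, frequency sets, and example words are illustrative:

```python
# Generate deletion / exchange / transposition variants of a low-frequency
# word and keep those found in the high-frequency word list.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def variants(word: str) -> set[str]:
    out = set()
    for i in range(len(word)):
        out.add(word[:i] + word[i + 1:])                       # deletion
        for c in ALPHABET:                                     # exchange
            out.add(word[:i] + c + word[i + 1:])
        if i + 1 < len(word):                                  # transposition
            out.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])
    out.discard(word)
    return out

high_freq = {"syntax", "semantic", "analysis"}  # frequent in the document
low_freq_word = "semnatic"                      # rare, hence a typo suspect

print(variants(low_freq_word) & high_freq)      # {'semantic'}
```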