Title/Summary/Keyword: Lexical Analysis


Comparison of Tools for Static Analysis: Lexical Analysis and Semantic Analysis (정적 분석 툴의 비교: Lexical Analysis and Semantic Analysis)

  • Jang, Seongsoo;Choi, Young-Hyun;Lim, Hun-Jung;Eom, Jung-Ho;Chung, Tai-Myoung
    • Proceedings of the Korea Information Processing Society Conference / 2010.11a / pp.1180-1182 / 2010
  • As attacks by malicious code targeting software have become more frequent, checking for security vulnerabilities from the software development process onward has grown in importance. This paper examines and compares the tools used for static analysis, one of the techniques for analyzing software security vulnerabilities, and identifies their structures and characteristics. In doing so, we aim to lay the groundwork for our ultimate goal: developing a new static analysis tool with improved performance.
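The lexical-analysis style of static checking surveyed in this kind of work can be illustrated with a minimal sketch: a scanner that matches identifier tokens against a list of functions commonly flagged as unsafe. The rule set and example below are illustrative assumptions, not the rule database of any specific tool the paper compares; real lexical tools ship far larger databases and smarter token handling.

```python
import re

# Illustrative subset of C functions commonly flagged as risky by
# lexical static-analysis tools (real tools use much larger rule sets).
RISKY_CALLS = {"strcpy", "gets", "sprintf", "strcat"}

def scan_c_source(source: str):
    """Return (line_number, function) pairs for risky calls.

    A purely lexical check: it matches identifier tokens followed by '(',
    with no parsing or semantic analysis, so it can produce false
    positives (e.g. matches inside comments or string literals).
    """
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in re.finditer(r"\b([A-Za-z_]\w*)\s*\(", line):
            if match.group(1) in RISKY_CALLS:
                findings.append((lineno, match.group(1)))
    return findings

example = 'int main(void) {\n    char buf[8];\n    gets(buf);\n    return 0;\n}\n'
print(scan_c_source(example))  # [(3, 'gets')]
```

The simplicity of this token-level approach is exactly why semantic analysis (the other technique in the paper's title) is needed to cut down false positives.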

Some Issues on Causative Verbs in English

  • Cho, Sae-Youn
    • Language and Information / v.13 no.1 / pp.77-92 / 2009
  • Geis (1973) provided various properties of the subjects and the by + Gerund Phrase (GerP) in English causative constructions. Among them, the two main issues of Geis's analysis are as follows: first, unlike Lakoff (1965; 1966), the subject of English causative constructions, including causative-inchoative verbs such as liquefy, should be acts or events, not persons; second, the by + GerP in the construction is a complement of the causative verb. In addition to these issues, Geis provided various data exhibiting other idiosyncratic properties and proposed transformational rules, such as the Agent Creation Rule, along with rule orderings to explain them. Against Geis's claim, I propose that English causative verbs require either proper-noun or GerP subjects, and that the by + GerP in these constructions is a Verbal Modifier taking Gerunds, whose understood Affective-agent subject is identical to the subject of the causative verb with respect to the semantic index value. This enables us to solve the two main issues. At the same time, the other properties Geis mentioned can also be easily accounted for in Head-driven Phrase Structure Grammar (HPSG) by positing a few lexical constraints. On this basis, it is shown that, given these few lexical constraints and existing grammatical tools in HPSG, the constraint-based analysis proposed here gives a simpler explanation of the properties of English causative constructions noted by Geis, without transformational rules and rule orderings.


Applying Lexical Semantics to Automatic Extraction of Temporal Expressions in Uyghur

  • Murat, Alim;Yusup, Azharjan;Iskandar, Zulkar;Yusup, Azragul;Abaydulla, Yusup
    • Journal of Information Processing Systems / v.14 no.4 / pp.824-836 / 2018
  • The automatic extraction of temporal information from written texts is a key component of question answering and summarization systems, whose efficacy depends heavily on whether temporal expressions (TEs) are successfully extracted. In this paper, three different approaches to TE extraction in Uyghur are developed and analyzed. A novel approach that uses lexical semantics as additional information is also presented to extend classical approaches, which are mainly based on morphology and syntax. We used a manually annotated news dataset labeled with TIMEX3 tags and generated three models with different feature combinations. The experimental results show that the best run achieved 0.87 Precision, 0.89 Recall, and 0.88 F1-measure for Uyghur TE extraction. From the analysis of the results, we conclude that applying semantic knowledge resolves the ambiguity problem at a shallower level of language analysis and significantly aids the development of a more efficient Uyghur TE extraction system.
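The idea of layering lexical-semantic features on top of morphological and orthographic ones can be sketched as a feature function of the kind used in CRF- or maximum-entropy-style sequence labelers. Everything here is a hypothetical stand-in: the romanized Uyghur words, the `TIME_CLASS` lexicon, and the feature names are invented for illustration and do not come from the paper's dataset or feature set.

```python
# Hypothetical lexical-semantic classes for a few (romanized) Uyghur
# time words; a real system would use a full semantic lexicon.
TIME_CLASS = {"bügün": "DATE_WORD", "yil": "UNIT", "sa'et": "UNIT"}

def token_features(tokens, i, use_semantics=True):
    """Build a feature dict for token i in a sequence-labeling setup.

    The first group of features is the morphology/orthography baseline;
    the semantic-class features are the additional information the paper
    reports as improving TE extraction.
    """
    tok = tokens[i]
    feats = {
        "word": tok.lower(),
        "suffix3": tok[-3:],                 # crude morphological cue
        "is_digit": tok.isdigit(),
        "prev": tokens[i - 1].lower() if i > 0 else "<S>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</S>",
    }
    if use_semantics:
        feats["sem_class"] = TIME_CLASS.get(tok.lower(), "O")
        if i > 0:
            feats["prev_sem"] = TIME_CLASS.get(tokens[i - 1].lower(), "O")
    return feats

feats = token_features(["2018", "yil"], 1)  # features for "yil" ("year")
```

Toggling `use_semantics` gives exactly the kind of feature-combination comparison the abstract describes across its three models.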

Predicting CEFR Levels in L2 Oral Speech, Based on Lexical and Syntactic Complexity

  • Hu, Xiaolin
    • Asia Pacific Journal of Corpus Research / v.2 no.1 / pp.35-45 / 2021
  • With the wide spread of the Common European Framework of Reference (CEFR) scales, many studies attempt to apply them in routine teaching and rater training, while more evidence regarding criterial features at different CEFR levels is still urgently needed. The current study aims to explore complexity features that distinguish and predict CEFR proficiency levels in oral performance. Using a quantitative, corpus-based approach, this research analyzed lexical and syntactic complexity features across 80 transcriptions (covering the A1, A2, and B1 CEFR levels, as well as native speakers), based on an interview test, the Standard Speaking Test (SST). ANOVA and correlation analysis were conducted to exclude insignificant complexity indices before the discriminant analysis. As a result, distinctive differences in complexity between CEFR speaking levels were observed, and with a combination of six major complexity features as predictors, 78.8% of the oral transcriptions were classified into the appropriate CEFR proficiency levels. This further confirms the possibility of predicting the CEFR level of L2 learners from objective linguistic features. The study can serve as an empirical reference in language pedagogy, especially for L2 learners' self-assessment and teachers' prediction of students' proficiency levels. It also offers implications for the validation of rating criteria and the improvement of rating systems.
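The quantitative pipeline behind such studies starts by turning each transcription into numeric complexity indices. As a minimal sketch, assuming plain-text transcripts, the two indices below (type-token ratio for lexical diversity, mean sentence length as a crude syntactic proxy) stand in for the six predictors actually used in the study, which are not specified here.

```python
import re

def complexity_indices(transcript: str):
    """Compute two simple complexity indices from a transcript.

    - ttr: type-token ratio, a basic lexical-diversity measure.
    - mean_sentence_length: words per sentence, a rough syntactic proxy.
    Real CEFR studies use many more elaborate indices; this shows only
    the text-to-numbers step that feeds ANOVA/discriminant analysis.
    """
    sentences = [s for s in re.split(r"[.?!]", transcript) if s.strip()]
    tokens = re.findall(r"[a-z']+", transcript.lower())
    ttr = len(set(tokens)) / len(tokens)
    mean_len = len(tokens) / len(sentences)
    return {"ttr": round(ttr, 3), "mean_sentence_length": round(mean_len, 2)}

result = complexity_indices("I like music. I like playing the piano.")
print(result)  # {'ttr': 0.75, 'mean_sentence_length': 4.0}
```

Vectors like these, one per transcription, are what the discriminant analysis then maps to CEFR levels.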

Vocabulary Analysis of Listening and Reading Texts in 2020 EBS-linked Textbooks and CSAT (2020년 EBS 연계교재와 대학수학능력시험의 듣기 및 읽기 어휘 분석)

  • Kang, Dongho
    • The Journal of the Korea Contents Association / v.20 no.10 / pp.679-687 / 2020
  • The present study aims to investigate the lexical coverage of the BNC (British National Corpus) word lists and the Ministry of Education's 2015 Basic Vocabulary in the 2020 EBS-linked textbooks and CSAT. For the data analysis, AntWordProfiler was used to find lexical coverage and frequency. The findings showed that students can understand 95% of the tokens with a vocabulary of 3,000 and 4,000 BNC word families in the 2020 EBS-linked listening and reading books, respectively. 98% coverage can be reached with 4,000 word families in the EBS-linked listening book, while the same coverage requires 8,000 word families in the EBS-linked reading textbook. In contrast, 95% of the tokens in the 2020 CSAT listening and reading tests can be understood with 2,000 and 4,000 word families, respectively, while 98% coverage requires 4,000 and 7,000 word families, respectively. In summary, students need to know more words for the 2020 EBS-linked textbooks than for the 2020 CSAT tests, confirming Kim's (2016) findings.
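The coverage figures above come from cumulative profiling: each token in a text is assigned to a 1,000-word-family frequency band, and coverage is accumulated band by band until it crosses the 95% or 98% threshold. A minimal sketch, assuming a toy `band_of` mapping rather than the actual BNC lists AntWordProfiler ships with:

```python
def coverage_by_band(text_tokens, band_of):
    """Cumulative token coverage per 1,000-word-family band.

    `band_of` maps a word to its frequency band (1 = first 1,000 word
    families); words absent from the lists count as off-list, as in
    word-profiler output.
    """
    total = len(text_tokens)
    counts = {}
    for tok in text_tokens:
        band = band_of.get(tok)       # None = off-list
        counts[band] = counts.get(band, 0) + 1
    coverage, running = {}, 0
    for band in sorted(b for b in counts if b is not None):
        running += counts[band]
        coverage[band * 1000] = round(100 * running / total, 1)
    return coverage

band_of = {"the": 1, "use": 1, "analysis": 2, "lexical": 3}
tokens = ["the", "use", "the", "analysis", "lexical", "zyzzyva"]
print(coverage_by_band(tokens, band_of))  # {1000: 50.0, 2000: 66.7, 3000: 83.3}
```

Reading off the smallest band whose cumulative value reaches 95% (or 98%) gives exactly the "N word families for X% coverage" statements in the abstract.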

Emotion Analysis Using a Bidirectional LSTM for Word Sense Disambiguation (양방향 LSTM을 적용한 단어의미 중의성 해소 감정분석)

  • Ki, Ho-Yeon;Shin, Kyung-shik
    • The Journal of Bigdata / v.5 no.1 / pp.197-208 / 2020
  • Lexical ambiguity means that a word can be interpreted as having two or more meanings, as with homonymy and polysemy, and word sense ambiguity is common in words expressing emotions. Because such words project human psychology, they convey specific and rich contexts, resulting in lexical ambiguity. In this study, we propose an emotion classification model that disambiguates word sense using a bidirectional LSTM. It is based on the assumption that if the information of the surrounding context is fully reflected, the problem of lexical ambiguity can be solved and the emotion that a sentence intends to express can be identified unambiguously. The bidirectional LSTM is an algorithm frequently used in natural language processing research requiring contextual information, and it is adopted in this study to learn context. GloVe embedding is used as the embedding layer of the model, and its performance was verified against models using LSTM and RNN algorithms. Such a framework could contribute to various fields, including marketing, by connecting the emotions of SNS users to their desire for consumption.
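The reason a bidirectional recurrent layer helps with sense disambiguation is that each position's representation combines a left-to-right and a right-to-left pass, so every word "sees" both its preceding and following context. A minimal NumPy sketch of that wiring, using a plain tanh RNN cell as a stand-in for the LSTM cell and random vectors in place of GloVe embeddings (all dimensions here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 4, 3          # sequence length, embedding dim, hidden dim
x = rng.normal(size=(T, d_in))  # stand-in for GloVe word embeddings

# One recurrent cell per direction (tanh RNN as a simplified LSTM stand-in;
# the bidirectional wiring, not the cell internals, is the point here).
Wf, Uf = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wb, Ub = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))

def run(W, U, seq):
    h, out = np.zeros(d_h), []
    for t in range(len(seq)):
        h = np.tanh(W @ seq[t] + U @ h)
        out.append(h)
    return out

fwd = run(Wf, Uf, x)              # left-to-right pass
bwd = run(Wb, Ub, x[::-1])[::-1]  # right-to-left pass, realigned to positions
# Each position's state now encodes both left and right context:
h_bi = [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]
print(h_bi[0].shape)  # (6,)
```

In the actual model these concatenated states would feed a classification layer over emotion labels; here they simply show why an ambiguous emotion word gets a context-dependent representation.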

The Relationship between Lexical Retrieval and Coverbal Gestures (어휘인출과 구어동반 제스처의 관계)

  • Ha, Ji-Wan;Sim, Hyun-Sub
    • Korean Journal of Cognitive Science / v.22 no.2 / pp.123-143 / 2011
  • At what point in the process of speech production are gestures involved? According to the Lexical Retrieval Hypothesis, gestures are involved in lexicalization at the formulating stage. According to the Information Packaging Hypothesis, gestures are involved in the conceptual planning of messages at the conceptualizing stage. We investigated these hypotheses using the game situation in a TV program that induced the players to engage in both lexicalization and conceptualization simultaneously. The transcription of the verbal utterances was augmented with all arm and hand gestures produced by the players. Coverbal gestures were classified into two types: lexical gestures and motor gestures. As a result, concrete words elicited lexical gestures significantly more frequently than abstract words, and abstract words elicited motor gestures significantly more frequently than concrete words. The difficulty of conceptualization for concrete words was significantly correlated with the amount of lexical gestures. However, the number of words and the word frequency were not correlated with the amount of either gesture type. This result supports the Information Packaging Hypothesis. Above all, the importance of motor gestures can be inferred from the finding that abstract words elicited motor gestures more frequently than concrete words. Motor gestures, which have been considered unrelated to verbal production, were excluded from analysis in many gesture studies. This study revealed that motor gestures appear to be connected to abstract conceptualization.


Restricting Answer Candidates Based on Taxonomic Relatedness of Integrated Lexical Knowledge Base in Question Answering

  • Heo, Jeong;Lee, Hyung-Jik;Wang, Ji-Hyun;Bae, Yong-Jin;Kim, Hyun-Ki;Ock, Cheol-Young
    • ETRI Journal / v.39 no.2 / pp.191-201 / 2017
  • This paper proposes an approach using taxonomic relatedness for answer-type recognition and type coercion in a question-answering system. We introduce a question analysis method for the lexical answer type (LAT) and semantic answer type (SAT) and describe the construction of a taxonomy linking them. We also analyze the effectiveness of type coercion based on the taxonomic relatedness of the two answer types (ATs). Compared with the rule-based approach of IBM's Watson, our LAT detector, which combines rule-based and machine-learning approaches, achieves an 11.04% recall improvement without a sharp decline in precision. Our SAT classifier with a relatedness-based validation method achieves a precision of 73.55%. For type coercion using the taxonomic relatedness between both ATs and answer candidates, we construct an answer-type taxonomy that encodes the semantic relationship between the two ATs. We also describe how to link heterogeneous lexical knowledge bases, and propose three strategies for type coercion based on the relatedness between the two ATs and answer candidates in this taxonomy. Finally, we demonstrate that combining the individual type-coercion strategies creates a synergistic effect.
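Taxonomic relatedness of the kind used for type coercion can be sketched as an inverse-path-length score over a type hierarchy: an answer candidate is kept only if its type is close enough to the expected answer type. The toy taxonomy, the scoring formula, and the threshold below are illustrative assumptions, not the paper's actual taxonomy, measure, or three coercion strategies.

```python
# Toy answer-type taxonomy (child -> parent); the paper instead builds
# its taxonomy by linking heterogeneous lexical knowledge bases.
PARENT = {
    "scientist": "person", "politician": "person",
    "person": "entity", "city": "location", "location": "entity",
}

def ancestors(node):
    """Chain from a node up to the taxonomy root, inclusive."""
    chain = [node]
    while node in PARENT:
        node = PARENT[node]
        chain.append(node)
    return chain

def relatedness(a, b):
    """Inverse path length between two types (1.0 = identical types)."""
    ca, cb = ancestors(a), ancestors(b)
    common = next(n for n in ca if n in cb)   # lowest common ancestor
    dist = ca.index(common) + cb.index(common)
    return 1 / (1 + dist)

def coerce(answer_type, candidate_type, threshold=0.4):
    """One possible coercion strategy: accept a candidate whose type is
    sufficiently related to the expected answer type."""
    return relatedness(answer_type, candidate_type) >= threshold

print(relatedness("scientist", "politician"))  # 0.333... (siblings)
print(coerce("scientist", "person"))           # True (relatedness 0.5)
```

Restricting answer candidates then amounts to filtering the candidate list with a check like `coerce` before ranking.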

Cross-Enrichment of the Heterogenous Ontologies Through Mapping Their Conceptual Structures: the Case of Sejong Semantic Classes and KorLexNoun 1.5 (이종 개념체계의 상호보완방안 연구 - 세종의미부류와 KorLexNoun 1.5 의 사상을 중심으로)

  • Bae, Sun-Mee;Yoon, Ae-Sun
    • Language and Information / v.14 no.1 / pp.165-196 / 2010
  • The primary goal of this paper is to propose methods of enriching two heterogeneous ontologies: Sejong Semantic Classes (SJSC) and KorLexNoun 1.5 (KLN). To achieve this goal, the study introduces the pros and cons of the two ontologies and analyzes the error patterns found during the fine-grained manual mapping process between them. Error patterns can be classified into four types: (1) structural defects in node branching, (2) errors in assigning semantic classes, (3) deficiencies in the linguistic information provided, and (4) lack of lexical units representing specific concepts. According to these error patterns, we propose different solutions to correct the node-branching defects and semantic class assignments, to complement the missing linguistic information, and to increase the number of lexical units suitably allotted to their corresponding concepts. Using the results of this study, we can obtain more enriched ontologies by correcting the defects and errors in each one, which will enhance their practicality for syntactic and semantic analysis.
