• Title/Summary/Keyword: Natural Language Analysis


A Machine Learning Approach to Korean Language Stemming

  • Cho, Se-hyeong
    • Journal of the Korean Institute of Intelligent Systems / v.11 no.6 / pp.549-557 / 2001
  • Morphological analysis and POS tagging require a dictionary for the language at hand. With such an approach, however, it is impossible to analyze a language without a dictionary, and difficulties also arise when a significant portion of the vocabulary is new or unknown. This paper explores the possibility of learning the morphology of an agglutinative language, in particular Korean, without any prior lexical knowledge of the language. We use unsupervised learning in that there is no instructor to guide the outcome of the learner, nor any tagged corpus. The main characteristics of the approach are as follows: first, we use only a raw corpus without any tags attached or any dictionary; second, unlike many heuristics that are theoretically ungrounded, the method is based on widely accepted statistical methods. The method is currently applied only to Korean, but since it is essentially language-neutral it can easily be adapted to other agglutinative languages.

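The abstract above describes the setting (raw, untagged corpus; statistical rather than heuristic) but not the concrete algorithm, so the following Python fragment is only a rough, hypothetical sketch of frequency-based unsupervised segmentation of Korean word forms into stem and ending; the scoring function and the toy corpus are our own assumptions, not the paper's model.

```python
# Hypothetical sketch: learn candidate stems/endings from a raw corpus by counting
# all prefixes and suffixes, then split each word at the best-supported boundary.
from collections import Counter

def train_counts(corpus_words):
    """Count every prefix (candidate stem) and proper suffix (candidate ending)."""
    prefix_counts, suffix_counts = Counter(), Counter()
    for w in corpus_words:
        for i in range(1, len(w) + 1):
            prefix_counts[w[:i]] += 1
        for i in range(1, len(w)):
            suffix_counts[w[i:]] += 1
    return prefix_counts, suffix_counts

def best_split(word, prefix_counts, suffix_counts):
    """Choose the split whose (stem, ending) pair is best supported by corpus frequencies."""
    best, best_score = (word, ""), 0
    for i in range(1, len(word)):
        stem, ending = word[:i], word[i:]
        score = prefix_counts[stem] * suffix_counts[ending]
        if score > best_score:
            best, best_score = (stem, ending), score
    return best

corpus = "먹었다 먹는다 먹고 먹으면 보았다 본다 보고 보면".split()   # toy raw corpus
prefixes, suffixes = train_counts(corpus)
print(best_split("먹었다", prefixes, suffixes))   # prints a (stem, ending) split learned from raw text
```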

Comparison of Sentiment Analysis from Large Twitter Datasets by Naïve Bayes and Natural Language Processing Methods

  • Back, Bong-Hyun;Ha, Il-Kyu
    • Journal of information and communication convergence engineering / v.17 no.4 / pp.239-245 / 2019
  • Recently, efforts to extract various kinds of information from the vast amount of social network service (SNS) big data generated in daily life have expanded. SNS big data consist of sentences classified as unstructured data, which complicates processing, and as the volume grows, rapid processing techniques are required to extract valuable information. We herein propose a system that can extract human sentiment information from vast amounts of unstructured SNS big data using the naïve Bayes algorithm and natural language processing (NLP), and we analyze the effectiveness of the proposed method through various experiments. In terms of sentiment accuracy, the experimental results showed that the machine learning method using the naïve Bayes algorithm achieved 63.5% accuracy, lower than that of the NLP method. In terms of data processing speed, however, the naïve Bayes method demonstrated processing performance approximately 5.4 times higher than that of the NLP method.
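
As a companion to the naïve Bayes branch of the comparison above, here is a minimal sentiment-classifier sketch; the bag-of-words features, the scikit-learn implementation, and the toy tweets are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal naive Bayes sentiment classifier over bag-of-words features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data standing in for labeled tweets (invented for illustration).
train_tweets = ["great game tonight", "worst service ever", "love this phone", "terrible traffic"]
train_labels = ["pos", "neg", "pos", "neg"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_tweets, train_labels)

print(model.predict(["love this game"]))   # expected: ['pos'] on this toy data
```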

Analytical Tools for Ideological Texts in Critical Reading Instruction

  • Lee, Jong-Hee
    • English Language & Literature Teaching / v.10 no.3 / pp.89-112 / 2004
  • This article examines the ways in which language can be exploited to manipulate the reader's interpretation of a text so that he or she adopts certain lines of thought according to the writer's persuasive intent. Such functions of language provide a valid foundation for teaching critical reading skills and for exploring an adequate approach to discourse analysis. A pilot study was conducted to find out the extent to which readers can be coaxed into thinking in certain ways by specific linguistic devices employed in ideological texts. Forty-seven subjects divided into two groups (undergraduate humanities majors and natural science majors) participated in a two-part questionnaire survey intended to examine their critical reading abilities. The empirical results indicate that the humanities majors were more inclined to take a holistic approach when processing commercial advertisement texts, and their abilities for critical interpretation appeared to be lower than those of the natural science majors, who showed a relatively strong tendency to take an analytical approach in decoding the textual facts. As a consequence, the pedagogic implications for increasing critical reading abilities are drawn together into a set of analytical procedures for ideological texts, linked with instructional guidelines that emphasize the reader's logical and analytical reasoning power, generally accepted as a prerequisite for cracking covert language gambits.


The Modification Scope Analysis of the Embedded Sentences in Korean and Japanese Machine Translation (한일 기계번역을 위한 보문의 수식 Scope 해석)

  • Lee, Soo-Hyun
    • Annual Conference on Human and Language Technology / 1996.10a / pp.346-350 / 1996
  • Complex sentences in Korean and Japanese exhibit a variety of syntactic phenomena, and because elements such as subjects and objects are frequently omitted and therefore do not appear on the surface of the sentence, the processing of modification structures becomes complicated and is a source of ambiguity in parsing. This paper therefore describes a method for analyzing the modification scope of Korean and Japanese embedded sentences using a DPN. We first identify the commonalities and differences between the two languages, represent Korean and Japanese embedded sentences in a common representational form, construct a DPN from the case information of the verb, and explain how the modification scope of embedded sentences is analyzed on the DPN.

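The abstract above only outlines the approach, so the fragment below is a loose, hypothetical illustration of the general idea of using a verb's case information to narrow down which noun an embedded clause can modify; the case frames, noun types, and selection rule are invented for illustration and do not reproduce the paper's DPN formalism.

```python
# Hypothetical, simplified illustration only (not the paper's DPN): use a verb's case
# frame to decide which candidate head nouns an embedded clause can plausibly modify.
CASE_FRAMES = {                     # hypothetical case frames keyed by verb
    "읽다": {"이/가": "human", "을/를": "document"},
}
NOUN_TYPES = {"학생": "human", "논문": "document", "속도": "abstract"}

def compatible_heads(verb, candidate_nouns):
    """Return the candidate head nouns whose semantic type fills a slot of the verb's case frame."""
    allowed = set(CASE_FRAMES.get(verb, {}).values())
    return [n for n in candidate_nouns if NOUN_TYPES.get(n) in allowed]

# The clause headed by '읽다' (read) can modify '학생' (student) or '논문' (paper), not '속도' (speed).
print(compatible_heads("읽다", ["학생", "논문", "속도"]))
```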

Analysis of the Status of Natural Language Processing Technology Based on Deep Learning (딥러닝 중심의 자연어 처리 기술 현황 분석)

  • Park, Sang-Un
    • The Journal of Bigdata / v.6 no.1 / pp.63-81 / 2021
  • The performance of natural language processing is rapidly improving due to the recent development and application of machine learning and deep learning technologies, and as a result its field of application is expanding. In particular, as the demand for analysis of unstructured text data increases, interest in NLP (natural language processing) is also increasing. However, due to the complexity and difficulty of natural language preprocessing and of machine learning and deep learning theory, the barriers to using natural language processing remain high. In this paper, to give an overall understanding of NLP, we examine the main fields of NLP that are currently being actively researched and the current state of the major technologies centered on machine learning and deep learning, in order to provide a foundation for understanding and utilizing NLP more easily. We first trace how NLP has changed within AI (artificial intelligence) through changes in the taxonomy of AI technology. The main areas of NLP, which consist of language modeling, text classification, text generation, document summarization, question answering, and machine translation, are explained together with state-of-the-art deep learning models. In addition, the major deep learning models used in NLP are described, and the data sets and evaluation measures used for performance evaluation are summarized. We hope that researchers who want to utilize NLP for various purposes in their own fields will be able to grasp the overall technical status and the main technologies of NLP through this paper.
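
As a quick illustration of how two of the surveyed task areas (text classification and question answering) are typically run with pretrained deep learning models, here is a minimal sketch using the Hugging Face transformers library; the library and its default checkpoints are our choice, not something prescribed by the paper.

```python
# Run two NLP tasks with off-the-shelf pretrained transformer models.
from transformers import pipeline

# Text classification (here: sentiment) with a default pretrained model.
classifier = pipeline("sentiment-analysis")
print(classifier("Deep learning has sharply improved NLP accuracy."))

# Question answering, another of the areas surveyed in the paper.
qa = pipeline("question-answering")
print(qa(question="What improved NLP performance?",
         context="The performance of natural language processing is rapidly improving "
                 "due to the development of machine learning and deep learning."))
```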

Effective Requirement Analysis Method based on Linguistic & Semantic Textual Analysis (언어학 및 의미적 문맥 분석을 통한 효율적인 요구사항 분석 방법)

  • Park, Bo-Kyung;Yi, Geun-Sang;Kim, R. Young-Chul
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.6 / pp.97-103 / 2017
  • For high-quality software, exact requirements must be defined and analyzed at an early stage of software development. However, most natural language requirements have poor readability and understandability, which makes it difficult to identify use cases, and requirements are often duplicated across objects or terms with the same meaning. Solving this problem requires an effective requirement analysis method based on linguistic and semantic textual analysis. In this paper, we propose an improved semantic analysis method that adopts the linguist Fillmore's linguistic mechanism. The method is expected to yield easily readable and exactly understandable requirements specifications by modeling goal-oriented use cases from natural-language requirements.
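
To make the idea of linguistic requirement analysis concrete, the sketch below maps a requirement sentence to rough actor/action/object roles, a simplified stand-in for Fillmore-style case roles; the use of spaCy and the specific dependency labels are our assumptions, not the authors' tooling.

```python
# Extract simple actor / action / object roles from a requirement sentence.
import spacy

nlp = spacy.load("en_core_web_sm")

def extract_roles(requirement):
    doc = nlp(requirement)
    roles = []
    for token in doc:
        if token.pos_ == "VERB":
            actor = [c.text for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            obj = [c.text for c in token.children if c.dep_ in ("dobj", "obj")]
            roles.append({"action": token.lemma_, "actor": actor, "object": obj})
    return roles

print(extract_roles("The administrator shall register a new customer account."))
# e.g. [{'action': 'register', 'actor': ['administrator'], 'object': ['account']}]
```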

Transformer-based Language Recognition Technique for Big Data (빅데이터를 위한 트랜스포머 기반의 언어 인식 기법)

  • Hwang, Chi-Gon;Yoon, Chang-Pyo;Lee, Soo-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.267-268 / 2022
  • Recently, big data analysis has been able to draw on various techniques thanks to the development of machine learning. Big data collected in the real world lacks automated refining techniques that identify identical or similar terms based on semantic analysis of the relationships between words. Because big data usually takes the form of sentences, morphological analysis or sentence understanding is required, and NLP, the set of techniques for analyzing natural language, can capture the relationships between words and sentences. In this paper, we study the advantages and disadvantages of the Transformer and the Reformer, techniques that address the shortcomings of the RNN, a time-series approach to big data.

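For context on why the Reformer matters here, the sketch below implements standard scaled dot-product attention; the n x n score matrix it builds is the quadratic-cost step that the Reformer's LSH attention is designed to avoid. The NumPy implementation is ours, for illustration only.

```python
# Standard scaled dot-product attention; memory and time grow with the n x n score matrix.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # shape (n, n): quadratic in sequence length
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V

n, d = 1024, 64                                            # sequence length, head dimension
Q = K = V = np.random.randn(n, d)
out = attention(Q, K, V)
print(out.shape, "score matrix entries:", n * n)
```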

Phrase-Chunk Level Hierarchical Attention Networks for Arabic Sentiment Analysis

  • Abdelmawgoud M. Meabed;Sherif Mahdy Abdou;Mervat Hassan Gheith
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.120-128 / 2023
  • In this work, we have presented ATSA, a hierarchical attention deep learning model for Arabic sentiment analysis. ATSA was proposed to address several challenges and limitations that arise when classical models are applied to opinion mining in Arabic. Arabic-specific challenges, including morphological complexity and language sparsity, were addressed by modeling semantic composition at the level of Arabic morphological analysis after tokenization. ATSA performs phrase-chunk sentiment embedding to provide a broader set of features that cover syntactic, semantic, and sentiment information. We used a phrase structure parser to generate syntactic parse trees that serve as a reference for ATSA. This allows semantic and sentiment composition to be modeled following the natural order in which words and phrase-chunks are combined in a sentence. The proposed model was evaluated on three Arabic corpora that correspond to different genres (newswire, online comments, and tweets) and different writing styles (MSA and dialectal Arabic). Experiments showed that each of the proposed contributions in ATSA achieves a significant improvement. The combination of all contributions, which makes up the complete ATSA model, improved classification accuracy by 3% and 2% on the Tweets and Hotel reviews datasets, respectively, compared to existing models.
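
The fragment below is a compact PyTorch sketch of the hierarchical attention idea (attention over the words inside each phrase-chunk, then attention over the chunk vectors); it is a simplified illustration under our own assumptions about dimensions and pooling, not the full ATSA architecture.

```python
# Two-level (word -> chunk -> sentence) attention pooling for sentence classification.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """Soft attention that pools a sequence of vectors into a single vector."""
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x):                              # x: (batch, seq, dim)
        weights = torch.softmax(self.score(x), dim=1)  # attention weight per position
        return (weights * x).sum(dim=1)                # (batch, dim)

class HierarchicalSentiment(nn.Module):
    """Word-level attention inside each phrase-chunk, then chunk-level attention."""
    def __init__(self, vocab_size, dim=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.word_attn = AttentionPool(dim)            # words  -> chunk vector
        self.chunk_attn = AttentionPool(dim)           # chunks -> sentence vector
        self.classify = nn.Linear(dim, num_classes)

    def forward(self, token_ids):                      # (batch, n_chunks, n_words)
        b, c, w = token_ids.shape
        words = self.embed(token_ids)                  # (b, c, w, dim)
        chunks = self.word_attn(words.reshape(b * c, w, -1))
        sentence = self.chunk_attn(chunks.reshape(b, c, -1))
        return self.classify(sentence)                 # (batch, num_classes)

model = HierarchicalSentiment(vocab_size=10000)
dummy = torch.randint(0, 10000, (2, 5, 7))             # 2 sentences, 5 chunks of 7 word ids each
print(model(dummy).shape)                              # torch.Size([2, 3])
```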

A Quantitative Linguistic Study for the Appreciation of the Lexical Richness (어휘 풍부성 평가에 대한 계량언어학적 연구 (프랑스어 텍스트를 중심으로))

  • Bae, Hee-Sook
    • Speech Sciences / v.7 no.3 / pp.139-149 / 2000
  • Studying language with quantitative linguistic methods is not a recent development. Lately, however, interest in quantitative linguistics has increased with the demand for communication between humans, or between humans and machines, which requires transferring the system of natural language onto the machine. This calls for quantitative linguistic study, because we are unable to grasp the characteristics of the smallest linguistic units and their structure in an intuitive way. Quantitative linguistics treats the internal structure of language through the relation between linguistic units and their quantitative characteristics, so the growing interest in the field is natural. Korean linguists are also taking an interest in quantitative linguistics, although in Korea it has not yet advanced to the level of statistical analysis. This study therefore shows how statistics can be applied in the field of linguistics through two texts written in French: Lovers of the Subway and Our Life's A. B. C.

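As a worked example of the kind of quantitative measure involved in appraising lexical richness, the snippet below computes the type-token ratio TTR = V / N and Guiraud's index R = V / sqrt(N), where N is the number of tokens and V the number of distinct word forms; whether the paper uses exactly these indices is not stated in the abstract.

```python
# Two simple lexical richness measures computed from a short text sample.
import math
import re

def lexical_richness(text):
    tokens = re.findall(r"\w+", text.lower())          # crude tokenization, Unicode-aware
    n, v = len(tokens), len(set(tokens))
    return {"tokens": n, "types": v, "TTR": v / n, "Guiraud": v / math.sqrt(n)}

sample = "Le métro arrive et le métro repart, et la vie continue."   # invented French sample
print(lexical_richness(sample))
```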

A Study on the Construction of Financial-Specific Language Model Applicable to the Financial Institutions (금융권에 적용 가능한 금융특화언어모델 구축방안에 관한 연구)

  • Jae Kwon Bae
    • Journal of Korea Society of Industrial Information Systems / v.29 no.3 / pp.79-87 / 2024
  • Recently, the importance of pre-trained language models (PLMs) has been emphasized for natural language processing (NLP) tasks such as text classification, sentiment analysis, and question answering. Korean PLMs show high performance on NLP in general-purpose domains but are weak in domains such as finance, medicine, and law. The main goal of this study is to propose a training process and method for building a financial-specific language model that performs well not only in the financial domain but also in general-purpose domains. The five steps for building the financial-specific language model are (1) financial data collection and preprocessing, (2) selection of a model architecture such as a PLM or foundation model, (3) domain data learning and instruction tuning, (4) model verification and evaluation, and (5) model deployment and utilization. Through this, a method for constructing pre-training data that takes advantage of the characteristics of the financial domain, together with efficient LLM training methods, namely adaptive learning and instruction tuning techniques, is presented.
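
The sketch below illustrates step (3), continued pre-training of a masked language model on financial text; the base checkpoint (klue/bert-base), the Hugging Face tooling, and the toy sentences are our assumptions, not choices stated in the paper, and instruction tuning is omitted.

```python
# Domain-adaptive pre-training sketch: continue masked-LM training on financial text.
from datasets import Dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "klue/bert-base"                               # hypothetical choice of Korean PLM
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

financial_texts = ["금리 인상으로 대출 수요가 감소했다.",
                   "The bank revised its loan loss provisions for the quarter."]
ds = Dataset.from_dict({"text": financial_texts})     # tiny toy corpus for illustration
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
            remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fin-plm", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15),
)
trainer.train()                                       # continued pre-training on domain text
```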