• Title/Summary/Keyword: Artificial grammar


A Study on the Use of Artificial Intelligence Chatbots for Improving English Grammar Skills (영어 문법 실력 향상을 위한 인공지능 챗봇 활용에 관한 연구)

  • Kim, Na-Young
    • Journal of Digital Convergence
    • /
    • v.17 no.8
    • /
    • pp.37-46
    • /
    • 2019
  • The purpose of this study is to explore the effects of using artificial intelligence chatbots on improving Korean college students' English grammar skills. Seventy undergraduate students taking a General English class at a university in Korea participated. There were two groups: the chatbot group consisted of 36 students and the human group of 34. Over 16 weeks, the chatbot group engaged in ten chat sessions with a chatbot, while the human group chatted with a human partner. Pre- and post-tests were administered to examine changes in the participants' grammar skills over time, and an independent t-test was then run to compare the improvement between the two groups (a minimal sketch of such a comparison follows this entry). The main findings are as follows. First, participants in both groups significantly improved their English grammar skills, indicating the beneficial effects of engaging in chat. Second, there was a statistically significant difference in improvement between the chatbot and human groups, indicating the superior effect of chatbot use. The study thus confirmed that participants in the chatbot group improved their grammar skills more than those in the human group. Based on these findings, suggestions for future chatbot studies are discussed.
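
A minimal sketch of the between-group comparison described in the entry above, assuming hypothetical gain scores; the study's data are not published with the abstract, so every number below is a placeholder, and Welch's correction is an editorial default rather than the authors' choice.

    # Hypothetical grammar-test gain scores (post-test minus pre-test) per group.
    import numpy as np
    from scipy import stats

    chatbot_gains = np.array([12, 9, 15, 11, 8, 14, 10, 13])  # placeholder values
    human_gains = np.array([7, 6, 9, 5, 8, 6, 7, 4])          # placeholder values

    # Independent t-test on the gains, mirroring the between-group comparison.
    t_stat, p_value = stats.ttest_ind(chatbot_gains, human_gains, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")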

Implicit Learning with Artificial Grammar : Simulations using EPAM IV (인공 문법을 사용한 암묵 학습: EPAM IV를 사용한 모사)

  • 정혜선
    • Korean Journal of Cognitive Science
    • /
    • v.14 no.1
    • /
    • pp.1-9
    • /
    • 2003
  • In implicit learning tasks, human participants learn grammatical letter strings better than random letter strings. After learning grammatical letter strings, participants were able to judge the grammaticality of new letter strings that they had never seen before. EPAM (Elementary Perceiver and Memorizer) IV, a rote learner without any rule-abstraction mechanism, was used to simulate these results. The simulations showed that EPAM IV with a within-item chunking function learned grammatical letter strings better than random letter strings and discriminated grammatical letter strings from non-grammatical ones. The success of EPAM IV in simulating human performance strongly indicates that recognition memory based on chunking plays a critical role in implicit learning (a toy illustration of the chunking idea follows this entry).

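The chunking account above can be illustrated with a toy familiarity score, which is not EPAM IV itself: grammatical strings reuse letter chunks seen during training, so even a rote chunk store separates them from random strings. The training strings and chunk sizes below are invented for illustration.

    # Toy chunk-based familiarity for artificial-grammar strings.
    from collections import Counter

    def chunks(s, sizes=(2, 3)):
        # All bigrams and trigrams of a letter string.
        return [s[i:i + n] for n in sizes for i in range(len(s) - n + 1)]

    training = ["TPTXVS", "TPPTS", "TXXVPS"]  # hypothetical "grammatical" strings
    store = Counter(c for s in training for c in chunks(s))

    def familiarity(s):
        cs = chunks(s)
        return sum(store[c] > 0 for c in cs) / len(cs)  # fraction of known chunks

    print(familiarity("TPTS"))  # built from familiar chunks -> high score
    print(familiarity("QZRW"))  # random string -> low score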

DG-based SPO tuple recognition using self-attention M-Bi-LSTM

  • Jung, Joon-young
    • ETRI Journal
    • /
    • v.44 no.3
    • /
    • pp.438-449
    • /
    • 2022
  • This study proposes a dependency grammar-based self-attention multilayered bidirectional long short-term memory (DG-M-Bi-LSTM) model for subject-predicate-object (SPO) tuple recognition from natural language (NL) sentences. To add recent knowledge to a knowledge base autonomously, it is essential to extract knowledge from large amounts of NL data. Therefore, this study proposes a high-accuracy SPO tuple recognition model that requires only a small amount of training data to extract knowledge from NL sentences. The accuracy of SPO tuple recognition using DG-M-Bi-LSTM is compared with that of an NL-based self-attention multilayered bidirectional LSTM, DG-based bidirectional encoder representations from transformers (BERT), and NL-based BERT to evaluate its effectiveness. The DG-M-Bi-LSTM model achieves the best recognition accuracy for extracting SPO tuples from NL sentences even though it has fewer deep neural network (DNN) parameters than BERT. In particular, its accuracy is better than that of BERT when the training data are limited. Additionally, its pretrained DNN parameters can be applied to other domains because it learns the structural relations in NL sentences.
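
A minimal PyTorch sketch of a self-attention multilayered Bi-LSTM tagger in the spirit of the model described above; the dimensions, the four-tag SPO label set, and the omission of the dependency-grammar features are simplifications assumed for illustration, not the authors' configuration.

    import torch
    import torch.nn as nn

    class AttnBiLSTMTagger(nn.Module):
        def __init__(self, vocab_size, embed_dim=128, hidden=256, layers=2, num_tags=4):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden, num_layers=layers,
                                bidirectional=True, batch_first=True)
            self.attn = nn.MultiheadAttention(2 * hidden, num_heads=4, batch_first=True)
            self.out = nn.Linear(2 * hidden, num_tags)  # e.g. O / SUBJ / PRED / OBJ

        def forward(self, token_ids):
            x = self.embed(token_ids)   # (batch, seq, embed_dim)
            h, _ = self.lstm(x)         # (batch, seq, 2*hidden)
            a, _ = self.attn(h, h, h)   # self-attention over the LSTM states
            return self.out(a)          # per-token tag logits

    model = AttnBiLSTMTagger(vocab_size=10000)
    logits = model(torch.randint(0, 10000, (1, 12)))  # one 12-token sentence
    print(logits.shape)                               # torch.Size([1, 12, 4])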

Interworking technology of neural network and data among deep learning frameworks

  • Park, Jaebok;Yoo, Seungmok;Yoon, Seokjin;Lee, Kyunghee;Cho, Changsik
    • ETRI Journal
    • /
    • v.41 no.6
    • /
    • pp.760-770
    • /
    • 2019
  • With the growing demand for neural network technologies, various neural network inference engines are being developed. However, each inference engine has its own neural network storage format, creating a growing demand for standardization. This study presents interworking techniques for ensuring the compatibility of neural networks and data among the various deep learning frameworks. The proposed technique standardizes the graph expression grammar and the training-data storage format using the Neural Network Exchange Format (NNEF) of Khronos. The proposed converter includes a lexical analyzer, a syntax analyzer, and a parser; the NNEF parser converts neural network information into a parse tree and quantizes the data (a toy parsing sketch follows this entry). To validate the proposed system, we verified that the MNIST example executes immediately after importing AlexNet's network and learned data. This study therefore contributes an efficient design technique for a converter that can execute a neural network and its learned data in various frameworks regardless of each framework's storage format.
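
A toy sketch of the parsing step only: tokenize a simplified, NNEF-like list of operations and build a parse tree (here, a list of operation nodes). The syntax below is an invented simplification, not the actual NNEF grammar, and quantization is omitted.

    import re

    TOKEN = re.compile(r"\s*([A-Za-z_]\w*|[(),=])")

    def lex(text):
        pos, tokens = 0, []
        while pos < len(text):
            m = TOKEN.match(text, pos)
            if not m:
                raise SyntaxError(f"unexpected character at {pos}")
            tokens.append(m.group(1))
            pos = m.end()
        return tokens

    def parse(tokens):
        """Parse statements of the form: output = op(arg, arg, ...)."""
        tree, i = [], 0
        while i < len(tokens):
            out, eq, op, lp = tokens[i:i + 4]
            assert eq == "=" and lp == "("
            i += 4
            args = []
            while tokens[i] != ")":
                if tokens[i] != ",":
                    args.append(tokens[i])
                i += 1
            i += 1  # skip ")"
            tree.append({"output": out, "op": op, "inputs": args})
        return tree

    src = "conv1 = conv(input, weights1) relu1 = relu(conv1)"
    print(parse(lex(src)))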

A Quality Comparison of English Translations of Korean Literature between Human Translation and Post-Editing

  • LEE, IL-JAE
    • International Journal of Advanced Culture Technology
    • /
    • v.6 no.4
    • /
    • pp.165-171
    • /
    • 2018
  • As artificial intelligence (AI) plays a crucial role in machine translation (MT), which has loomed large as a new translation paradigm, concerns have also arisen over whether MT can produce a product of the same quality as human translation (HT). In fact, several experimental MT studies report cases in which the post-edited MT product (post-editing, PE) was rated as equal to HT or often superior ([1],[2],[6]). Motivated by those studies on translation quality between HT and PE, this study set up an experimental situation in which Korean literature was translated into English by 3 translators and 3 post-editors. Afterwards, a group of 3 other Koreans checked the accuracy of HT and PE, and a group of 3 English native speakers scored their fluency. The findings are: (1) HT took at least twice as long as PE; (2) both HT and PE produced similar error types, with Mistranslation and Omission the major errors for accuracy and Grammar for fluency; (3) HT turned out to be inferior to PE for both accuracy and fluency.

A study on rethinking EDA in digital transformation era (DX 전환 환경에서 EDA에 대한 재고찰)

  • Seoung-gon Ko
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.1
    • /
    • pp.87-102
    • /
    • 2024
  • Digital transformation refers to the process by which a company or organization changes or innovates its existing business model or sales activities using digital technology. This requires the use of various digital technologies - cloud computing, IoT, artificial intelligence, etc. - to strengthen competitiveness in the market, improve the customer experience, and discover new businesses. In addition, in order to derive knowledge and insight about the market, customers, and the production environment, it is necessary to select the right data, preprocess the data into an analyzable state, and establish the right process for systematic analysis suited to the purpose. The usefulness of such digital data depends on proper pre-processing and the correct application of exploratory data analysis (EDA), which is useful for exploring information and hypotheses and for visualizing knowledge and insights. In this paper, we reexamine the philosophy and basic concepts of EDA and discuss key visualization information, information expression methods based on the grammar of graphics, and the ACCENT principle, the final visualization review standard, for effective visualization (a short grammar-of-graphics example follows this entry).
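
A short grammar-of-graphics example in Python using plotnine, a ggplot2-style library, to make the layered grammar mentioned above concrete; the dataset and aesthetic mappings are arbitrary placeholders.

    import pandas as pd
    from plotnine import ggplot, aes, geom_point, geom_smooth, labs

    df = pd.DataFrame({
        "x": [1, 2, 3, 4, 5, 6],
        "y": [2.1, 3.9, 6.2, 8.1, 9.8, 12.3],
        "group": ["a", "a", "a", "b", "b", "b"],
    })

    # Data + aesthetic mapping + geometric layers + labels: each component of the
    # grammar is declared separately and composed with "+".
    plot = (
        ggplot(df, aes(x="x", y="y", color="group"))
        + geom_point()
        + geom_smooth(method="lm", se=False)
        + labs(title="Layered grammar example", x="input", y="response")
    )
    plot.save("eda_example.png")  # or display the object directly in a notebook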

Sentence Unit De-noising Training Method for Korean Grammar Error Correction Model (한국어 문법 오류 교정 모델을 위한 문장 단위 디노이징 학습법)

  • Hoonrae Kim;Yunsu Kim;Gary Geunbae Lee
    • Annual Conference on Human and Language Technology
    • /
    • 2022.10a
    • /
    • pp.507-511
    • /
    • 2022
  • A grammar error correction model detects grammatical errors in input text and corrects them so that the text is grammatically correct; to give learners a better learning experience, it requires high precision and recall. To this end, recent studies fine-tune a model that has completed paragraph-level pre-training on a spelling-correction dataset. In this study, however, judging that the existing pre-training method is not well suited to grammar correction, we split the paragraph-level dataset into sentences and built a dataset by adding G2P (grapheme-to-phoneme) noise and edit-distance-based noise to each sentence (a minimal noise-injection sketch follows this entry). We then added sentence-level de-noising pre-training with this dataset on top of the paragraph-level pre-trained model, and performance improved as a result. To verify the effect of sentence-level splitting on its own, we also de-noising pre-trained a model on the sentence-split dataset without added noise and confirmed that it outperformed the existing model without de-noising pre-training. In addition, the two models pre-trained with only one of the two noise types showed no large performance difference, confirming that a model using only artificial random edit-distance noise can match the performance of a model using only G2P noise, which requires linguistic knowledge.

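A minimal sketch of the edit-distance-based noise described above: random character-level insert, delete, and substitute operations turn a clean sentence into a (noisy, clean) pair for de-noising pre-training. The character pool and edit count are assumptions; the G2P noise used in the paper requires Korean phonological rules and is not shown.

    import random

    CHAR_POOL = "가나다라마바사아자차카타파하"  # placeholder character pool

    def add_edit_noise(sentence, n_edits=2, charset=CHAR_POOL):
        chars = list(sentence)
        for _ in range(n_edits):
            op = random.choice(["insert", "delete", "substitute"])
            i = random.randrange(len(chars)) if chars else 0
            if op == "insert":
                chars.insert(i, random.choice(charset))
            elif op == "delete" and chars:
                chars.pop(i)
            elif op == "substitute" and chars:
                chars[i] = random.choice(charset)
        return "".join(chars)

    clean = "나는 어제 친구를 만났다"
    noisy = add_edit_noise(clean)
    print(noisy, "->", clean)  # (noisy, target) pair for de-noising pre-training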

An intelligent eddy current signal evaluation system to automate the non-destructive testing of steam generator tubes in nuclear power plant

  • Kang, Soon-Ju;Ryu, Chan-Ho;Choi, In-Seon;Kim, Young-Ill;Kim, Kill-Yoo;Hur, Young-Hwan;Choi, Seong-Soo;Choi, Baeng-Jae;Woo, Hee-Gon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1992.10b
    • /
    • pp.74-78
    • /
    • 1992
  • This paper describes an intelligent system for the automatic evaluation of eddy current (EC) signals for the inspection of steam generator (SG) tubes in nuclear power plants. Key features of the proposed system design are: (1) separate representation schemes for event-capturing knowledge in EC signals and for structural inspection knowledge in SG tube inspection; (2) each representation scheme is implemented with a different method, one using a syntactic pattern grammar and the other rule-based production (a rough illustration of this hybrid follows this entry). The intelligent system also includes a database system and a user interface to support integration of the hybrid knowledge-processing methods. The system based on the proposed concept is useful in simplifying the knowledge-elicitation process of the rule-based production system and in increasing performance in real-time signal inspection applications.

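A rough illustration of the hybrid idea above, not the original system: the eddy-current trace is symbolized into rise/fall/steady tokens, a syntactic pattern (here a regular expression) detects a candidate flaw signature, and a separate production rule applies structural knowledge about the tube location. All thresholds and the support-plate rule are invented.

    import re

    def symbolize(signal, step=0.2):
        # Convert successive differences into R (rise), F (fall), or S (steady).
        symbols = []
        for prev, cur in zip(signal, signal[1:]):
            delta = cur - prev
            symbols.append("R" if delta > step else "F" if delta < -step else "S")
        return "".join(symbols)

    FLAW_PATTERN = re.compile(r"R{2,}F{2,}")  # sharp rise followed by sharp fall

    def evaluate(signal, near_support_plate):
        candidate = FLAW_PATTERN.search(symbolize(signal)) is not None
        # Production rule: structural knowledge suppresses calls near support
        # plates, where geometry signals are known to mimic flaws.
        if candidate and near_support_plate:
            return "review manually"
        return "possible flaw" if candidate else "no indication"

    trace = [0.0, 0.1, 0.6, 1.2, 0.5, 0.0, 0.1]
    print(evaluate(trace, near_support_plate=False))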

Korean Contextual Information Extraction System using BERT and Knowledge Graph (BERT와 지식 그래프를 이용한 한국어 문맥 정보 추출 시스템)

  • Yoo, SoYeop;Jeong, OkRan
    • Journal of Internet Computing and Services
    • /
    • v.21 no.3
    • /
    • pp.123-131
    • /
    • 2020
  • Along with the rapid development of artificial intelligence technology, natural language processing, which deals with human language, is also being actively studied. In particular, BERT, a language model recently proposed by Google, has performed well in many areas of natural language processing by providing models pre-trained on large corpora. Although BERT provides a multilingual model, there are limitations when the original pre-trained BERT model is applied directly to Korean, so a model pre-trained on a large Korean corpus should be used. Moreover, text contains not only vocabulary and grammar but also contextual meanings, such as the relations between preceding and following passages and the situation. Existing natural language processing research has focused mainly on lexical or grammatical meaning, yet accurate identification of the contextual information embedded in text plays an important role in understanding context. Knowledge graphs, which link words by their relations, have the advantage that a computer can easily learn context from them. In this paper, we propose a system that extracts Korean contextual information using a BERT model pre-trained on a Korean corpus together with a knowledge graph (a hedged sketch of this combination follows this entry). We build models that extract the person, relationship, emotion, space, and time information that is important in a text, and we validate the proposed system through experiments.
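
A hedged sketch of combining a Korean BERT encoder with a small knowledge graph; the model name "klue/bert-base", the example sentence, and the toy triples are assumptions used only for illustration and are not the system described in the paper.

    import networkx as nx
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")  # assumed Korean BERT
    encoder = AutoModel.from_pretrained("klue/bert-base")

    sentence = "철수는 어제 서울에서 영희를 만났다."
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    sentence_vec = outputs.last_hidden_state[:, 0, :]  # [CLS]-position sentence vector
    print(sentence_vec.shape)

    # Toy knowledge graph of (subject, relation, object) triples that a downstream
    # extractor might produce from the sentence.
    kg = nx.MultiDiGraph()
    kg.add_edge("철수", "영희", relation="meets")
    kg.add_edge("철수", "서울", relation="located_in")
    kg.add_edge("철수", "어제", relation="time")
    print(list(kg.edges(data=True)))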

A study on detective story authors' style differentiation and style structure based on Text Mining (텍스트 마이닝 기법을 활용한 고전 추리 소설 작가 간 문체적 차이와 문체 구조에 대한 연구)

  • Moon, Seok Hyung;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.89-115
    • /
    • 2019
  • This study was conducted to present, through data analysis, the stylistic differences between Arthur Conan Doyle and Agatha Christie, both famous as writers of classical mystery novels, and further to present a text-mining-based methodology for the study of style. Mystery novels were chosen because the unique devices of the classical mystery novel carry strong stylistic characteristics, and Arthur Conan Doyle and Agatha Christie were chosen as subjects of analysis because they are familiar to the general reader, so that even people unfamiliar with this kind of research can follow it. The primary objective is to identify how the differences manifest within the text and to interpret the effects of these differences on the reader. Accordingly, in addition to events and characters, which are key elements of mystery novels, the writers' grammatical style of writing was defined as part of style and analyzed. Two series and four books were selected for each writer, and the text was divided into sentences to secure data. After measuring and assigning an emotion score to each sentence, the emotional arc across the pages was visualized as a graph, and the progression of events in each novel was identified under eight themes by applying topic modeling page by page. By organizing co-occurrence matrices and performing network analysis, changes in the relationships between characters as events progressed could be seen visually (a brief sketch of this step follows this entry). In addition, all sentences were classified into a grammatical system based on six types of writing style to identify differences between writers and between works. This made it possible to identify not only each author's general grammatical writing style but also the stylistic characteristics inherent in their unconscious habits, and to interpret the effects of these characteristics on the reader. This series of research steps can help readers understand the context of an entire text on the basis of a defined understanding of style, and it integrates stylistic studies that were previously conducted individually. Such prior understanding can also contribute to discovering and clarifying stylistic structure in unstructured data, including online text, which could in turn enable more accurate recognition of emotion and delivery of commands on interactive artificial intelligence platforms that currently convert voice into natural language. Given the increasing attempts to analyze online texts, including new media, in many ways to discover social phenomena and managerial value, this work is expected to contribute to more meaningful online text analysis and semantic interpretation. However, the fact that only two series and four books per author were analyzed is a limitation, in that the data were not analyzed in sufficient quantity. Applying writing characteristics originally defined for Korean text to English text is another limitation, as is restricting the stylistic characteristics to six types, which narrows the range of possible interpretations. It is also regrettable that the research analyzed classical mystery novels rather than text commonly used today, and that a wider variety of classical mystery writers were not compared.
Subsequent research will attempt to increase the diversity of interpretations by taking into account a wider variety of grammatical systems and stylistic structures, and will also be applied to the online texts frequently analyzed today to assess its potential for interpretation. It is expected that this will enable the interpretation and definition of the specific structure of style and open up various uses.
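
A brief sketch of the co-occurrence and network-analysis step mentioned above, using invented character mentions; the study builds such matrices per book and tracks them as the plot progresses.

    from itertools import combinations
    import networkx as nx

    # Hypothetical character mentions per page (or any text window).
    pages = [
        ["Holmes", "Watson"],
        ["Holmes", "Watson", "Lestrade"],
        ["Watson", "Lestrade"],
        ["Holmes", "Moriarty"],
    ]

    G = nx.Graph()
    for mentions in pages:
        for a, b in combinations(sorted(set(mentions)), 2):
            w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)  # co-occurrence count as edge weight

    # Degree centrality hints at which characters anchor the network of relations.
    print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))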