• Title/Summary/Keyword: Textual language

Search Results: 104

Real-Time Early Risk Detection in Textual Data Streams for Enhanced Online Safety (온라인 범죄 예방을 위한 실시간 조기 위험 감지 시스템)

  • Jinmyeong An;Geun-Bae Lee
    • Annual Conference on Human and Language Technology
    • /
    • 2023.10a
    • /
    • pp.525-530
    • /
    • 2023
  • With the recent growth of social networking services (SNS) and mobile services, users face an increasing variety of risks. In particular, risks such as online grooming and online rumors have become problems serious enough to destroy an individual's life. In many cases, however, these risks are only assessed after an incident has occurred, and mostly for the purpose of admitting legal evidence. This paper therefore focuses on preventing such problems in advance: we define the Real-Time Early Risk Detection (RERD) problem, in which continuously occurring events, such as ongoing conversations, are monitored in real time so that risk can be detected early. We formulate online grooming and rumors as RERD problems and introduce the corresponding datasets and evaluation metrics. We also introduce RT-ERD, a new reinforcement-learning-based model that solves the RERD problem accurately and quickly. In experiments on the online grooming and rumor domains that make up the RERD problem, RT-ERD achieved state-of-the-art performance, surpassing existing models in each domain.

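The core loop the abstract describes (read a stream message by message, decide as early as possible whether it is risky) can be sketched in a few lines. Everything below is illustrative: the lexicon, the moving-average scorer, and the 0.8 threshold are stand-ins for the learned RT-ERD policy, which the abstract does not detail.

```python
# Toy sketch of a real-time early risk detection (RERD) decision loop.
# The risk lexicon, scorer, and threshold are hypothetical stand-ins
# for the learned policy described in the abstract.

RISK_WORDS = {"secret", "threat", "rumor"}  # toy lexicon, illustration only

def message_risk(message: str) -> float:
    """Fraction of tokens that hit the toy risk lexicon."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    return sum(t in RISK_WORDS for t in tokens) / len(tokens)

def detect_early(stream, threshold=0.8, decay=0.5):
    """Process messages one at a time; raise an alert as soon as the
    running risk score crosses the threshold. Returns (alerted, index)."""
    score = 0.0
    for i, msg in enumerate(stream):
        # exponential moving average so recent messages dominate
        score = decay * score + (1 - decay) * message_risk(msg)
        if score >= threshold:
            return True, i          # early decision: stop reading
    return False, len(stream)       # stream ended without an alert
```

The point of the early-stopping return is that an RERD system is scored not only on accuracy but on how few messages it needed before deciding.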

Korean EFL Students' Reader Responses on an Expository Text and a Narrative Text

  • Lee, Jisun
    • English Language & Literature Teaching
    • /
    • v.17 no.3
    • /
    • pp.161-175
    • /
    • 2011
  • This paper examines Korean EFL high school students' reader responses to an expository text and a narrative text on the same topic. The purpose of the study is to investigate whether the students adopt different reading models for the two genres, and whether there are any differences by proficiency level. The analysis focuses on textual, critical, and aesthetic reading models in reader responses written in English by science-gifted high school students (N=30). The results show that the participants adopt different reading models for the expository text and the narrative text: they tend to read the expository text more critically while reading the narrative text in a more personal and emotional way. Moreover, regardless of proficiency level, they wrote longer responses to the narrative text than to the expository text. However, English proficiency level was not associated with any significant differences in the types of reading models. The findings characterize Korean EFL high school students' L2 reading and suggest the pedagogical implication of pursuing linguistic development alongside reading for pleasure.


Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son;Lee, Gueesang
    • International Journal of Contents
    • /
    • v.14 no.1
    • /
    • pp.1-6
    • /
    • 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich and precise information embodied in text is useful in a wide range of vision-based applications: text extracted from images can support automatic annotation, indexing, language translation, and assistance systems for impaired persons. Natural-scene text detection is therefore an active and important research topic in computer vision and document analysis. Previous methods perform poorly because of numerous false-positive and false-negative regions. In this paper, a fully-convolutional-network (FCN)-based method with a supervised architecture is used to localize textual regions. The model was trained directly on images, with pixel values as inputs and binary ground truth as labels. The method was evaluated on the ICDAR-2013 dataset and proved comparable to other feature-based methods. It could expedite future research on deep-learning-based text detection.
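An FCN of this kind outputs a per-pixel text/non-text score map, so localization reduces to thresholding that map and grouping the surviving pixels into regions. A minimal post-processing sketch (the score map here is hand-made, not a network output, and the threshold is an assumption):

```python
def text_regions(score_map, threshold=0.5):
    """Threshold a per-pixel text score map and group surviving pixels
    into 4-connected components, returning one bounding box
    (r0, c0, r1, c1) per component."""
    rows, cols = len(score_map), len(score_map[0])
    mask = [[score_map[r][c] >= threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # flood-fill one component while tracking its extent
                stack, r0, c0, r1, c1 = [(r, c)], r, c, r, c
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    r0, c0 = min(r0, y), min(c0, x)
                    r1, c1 = max(r1, y), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                boxes.append((r0, c0, r1, c1))
    return boxes
```

Real pipelines add non-maximum suppression and size filters on top of this grouping step; the sketch keeps only the core idea.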

ESL Students' Narratives of Writing Process: Multiplicity and Sociocultural Aspects

  • Kim, Ji-Young
    • English Language & Literature Teaching
    • /
    • v.17 no.1
    • /
    • pp.125-146
    • /
    • 2011
  • Within a framework of sociocultural approaches to writing process, this study examined six ESL graduate students' writing processes in depth based on individual interviews and their narratives of writing process. The narratives and interviews were analyzed to discover salient aspects of the students' writing processes and to understand the socially situated nature of the writing processes. First, it was observed that these six students displayed multiplicity in terms of their representations of writing process, episodes, textual practices, and concerns. Several factors including the writing task, students' familiarity with genre, literacy skills, attitude toward writing, and involvement in interaction contributed to individualized trajectories of writing process. It was also revealed that writing is unavoidably a socially situated practice. Students were situated in their cultural arenas as well as their disciplinary arenas, and these contexts helped the students serve as active agents producing and sharing knowledge. The confluence of personal, cognitive, and social factors observed in their writing processes suggests that writing process should be understood from multiple perspectives.


Intensified Sentiment Analysis of Customer Product Reviews Using Acoustic and Textual Features

  • Govindaraj, Sureshkumar;Gopalakrishnan, Kumaravelan
    • ETRI Journal
    • /
    • v.38 no.3
    • /
    • pp.494-501
    • /
    • 2016
  • Sentiment analysis incorporates natural language processing and artificial intelligence and has evolved as an important research area. Sentiment analysis on product reviews has been used in widespread applications to improve customer retention and business processes. In this paper, we propose a method for performing an intensified sentiment analysis on customer product reviews. The method involves the extraction of two feature sets from each of the given customer product reviews, a set of acoustic features (representing emotions) and a set of lexical features (representing sentiments). These sets are then combined and used in a supervised classifier to predict the sentiments of customers. We use an audio speech dataset prepared from Amazon product reviews and downloaded from the YouTube portal for the purposes of our experimental evaluations.
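The fusion step the abstract describes (acoustic features for emotion plus lexical features for sentiment, combined and fed to a supervised classifier) amounts to concatenating two feature vectors. A minimal sketch, with a toy linear scorer standing in for the paper's unspecified classifier:

```python
def fuse_features(acoustic, lexical):
    """Concatenate an acoustic (emotion) feature vector with a lexical
    (sentiment) feature vector into one classifier input."""
    return list(acoustic) + list(lexical)

def linear_score(features, weights, bias=0.0):
    """Toy linear classifier stand-in: a positive score indicates
    positive predicted sentiment."""
    assert len(features) == len(weights)
    return sum(f * w for f, w in zip(features, weights)) + bias
```

Early fusion like this is the simplest option; the feature dimensions and weights here are purely illustrative.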

Implementation of Meta Data-based Clinical Decision Support System for the Portability (이식성을 위한 메타데이터 기반의 CDSS 구축)

  • Lee, Sang Young;Lee, Yoon Hyeon;Lee, Yoon Seok
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.1
    • /
    • pp.221-229
    • /
    • 2012
  • A model for expressing metadata syntax in the eXtensible Markup Language (XML) was developed to increase the portability of the Arden Syntax in medical treatment. This model, ArdenML, uses two syntax-checking mechanisms: first, an XML validation process, and second, a syntax check using an XSL style sheet. Two hundred seventy-seven example MLMs were transformed into MLMs in ArdenML and validated against the schema and style sheet. Both the original MLMs and the reverse-parsed MLMs in ArdenML were checked using an Arden Syntax checker. The textual versions of the MLMs were successfully transformed into XML documents using the model, and the reverse parse yielded the original text versions of the MLMs.

Towards cross-platform interoperability for machine-assisted text annotation

  • de Castilho, Richard Eckart;Ide, Nancy;Kim, Jin-Dong;Klie, Jan-Christoph;Suderman, Keith
    • Genomics & Informatics
    • /
    • v.17 no.2
    • /
    • pp.19.1-19.10
    • /
    • 2019
  • In this paper, we investigate cross-platform interoperability for natural language processing (NLP) and, in particular, annotation of textual resources, with an eye toward identifying the design elements of annotation models and processes that are particularly problematic for, or amenable to, enabling seamless communication across different platforms. The study is conducted in the context of a specific annotation methodology, namely machine-assisted interactive annotation (also known as human-in-the-loop annotation). This methodology requires the ability to freely combine resources from different document repositories, access a wide array of NLP tools that automatically annotate corpora for various linguistic phenomena, and use a sophisticated annotation editor that enables interactive manual annotation coupled with on-the-fly machine learning. We consider three independently developed platforms, each of which utilizes a different model for representing annotations over text, and each of which performs a different role in the process.
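A recurring interoperability problem the abstract alludes to is that platforms anchor annotations differently, e.g. by character offsets versus token indices. A toy conversion sketch (whitespace tokenization is an assumption; real platforms each have their own tokenizers):

```python
def spans_to_token_ids(text, spans):
    """Convert character-offset annotation spans (start, end, label)
    into token-index spans, a common step when moving annotations
    between platforms that anchor them differently."""
    # build (start, end) character offsets for whitespace tokens
    offsets, pos = [], 0
    for tok in text.split():
        start = text.index(tok, pos)
        offsets.append((start, start + len(tok)))
        pos = start + len(tok)
    converted = []
    for start, end, label in spans:
        # a token belongs to the span if their character ranges overlap
        toks = [i for i, (s, e) in enumerate(offsets) if s < end and e > start]
        converted.append((toks[0], toks[-1], label))
    return converted
```

The inverse mapping (token indices back to character offsets) is what makes round-tripping between such platforms lossy when tokenizers disagree, which is one of the design problems the paper examines.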

Supervised text data augmentation method for deep neural networks

  • Jaehwan Seol;Jieun Jung;Yeonseok Choi;Yong-Seok Choi
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.3
    • /
    • pp.343-354
    • /
    • 2023
  • Recently, there have been many improvements in general language models using architectures such as GPT-3, proposed by Brown et al. (2020). Nevertheless, complex models can hardly be trained when the amount of data is very small. Data augmentation, which addresses this problem, has been unusually successful for image data: image augmentation significantly improves model performance without any additional data or architectural changes (Perez and Wang, 2017). Applying the same technique to textual data poses many challenges, however, because the appropriate noise to add is not obvious. We therefore developed a novel method for performing data augmentation on text data. We divide the data into signals, which carry positive or negative meaning, and noise, which does not, and then perform k-doc augmentation, randomly combining signals and noise from all the data to generate new data.
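The signal/noise recombination the abstract describes can be sketched concretely. The function name, the polarity-cue heuristic for deciding what counts as "signal", and the sampling scheme below are all illustrative assumptions, not the paper's actual k-doc procedure:

```python
import random

def k_doc_augment(docs, labels, k=2, n_new=3, seed=0):
    """Sketch of signal/noise recombination augmentation: pool 'signal'
    sentences (those carrying a polarity cue) and 'noise' sentences
    across all docs, then build each new doc from up to k same-label
    signals plus one noise sentence."""
    cues = {"good", "bad", "great", "awful"}       # toy polarity lexicon
    signals, noises = [], []
    for doc, label in zip(docs, labels):
        for sent in doc.split("."):
            sent = sent.strip()
            if not sent:
                continue
            (signals if cues & set(sent.lower().split()) else noises).append((sent, label))
    rng = random.Random(seed)
    new_docs = []
    for _ in range(n_new):
        label = rng.choice(labels)
        pool = [s for s, l in signals if l == label]
        picked = rng.sample(pool, min(k, len(pool)))
        noise = [rng.choice(noises)[0]] if noises else []
        new_docs.append((". ".join(picked + noise), label))
    return new_docs
```

Because the label travels with the signal sentences, each generated document keeps a defensible label, which is the property that makes this a supervised augmentation method.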

An intelligent system for automatic data extraction in E-Commerce Applications

  • Cardenosa, Jesus;Iraola, Luis;Tovar, Edmundo
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2001.01a
    • /
    • pp.202-208
    • /
    • 2001
  • One of the most frequent uses of the Internet is data gathering. The data can concern many themes, but perhaps one of the most in-demand fields is tourist information. Normally, the databases that support these systems are maintained manually. There is, however, another approach: extracting the data automatically, for instance from textual public information existing on the Web. This approach consists of extracting data from textual sources (public or not) and serving them, totally or partially, to users in the form they want. The obtained data can automatically maintain databases that support different systems, such as WAP mobile telephones or commercial systems accessed through natural language interfaces. The process has three main actors: the information itself, present in a particular context; the information supplier, who extracts data from the existing information; and the user or information searcher. This value-added chain reuses and adds value to existing data, even when the data were not originally intended for their final use. The main advantage of this approach is that it makes the information source independent of the information user: the original information belongs to a particular context, not necessarily the user's context. This paper describes an application based on this approach, developed by the authors in the FLEX ESPRIT IV project no. EP29158, in the work package "Knowledge Extraction & Data Mining", where information captured from digital newspapers is extracted and reused in a tourist-information context.


Optical Character Recognition for Hindi Language Using a Neural-network Approach

  • Yadav, Divakar;Sanchez-Cuadrado, Sonia;Morato, Jorge
    • Journal of Information Processing Systems
    • /
    • v.9 no.1
    • /
    • pp.117-140
    • /
    • 2013
  • Hindi is the most widely spoken language in India, with more than 300 million speakers. Because texts written in Hindi do not separate characters the way English texts do, Optical Character Recognition (OCR) systems developed for Hindi have very poor recognition rates. In this paper we propose an OCR system for printed Hindi text in Devanagari script that uses an Artificial Neural Network (ANN) to improve recognition. One of the major reasons for the poor recognition rate is error in character segmentation; the presence of touching characters in scanned documents further complicates segmentation, making it a major problem when designing an effective character segmentation technique. Preprocessing, character segmentation, feature extraction, and finally classification and recognition are the major steps of a general OCR system. The preprocessing tasks considered in this paper are conversion of gray-scale images to binary images, image rectification, and segmentation of the document's textual contents into paragraphs, lines, words, and finally basic symbols. The basic symbols, obtained as the fundamental units of the segmentation process, are recognized by the neural classifier. Three feature extraction techniques (histogram of projection based on mean distance, histogram of projection based on pixel value, and vertical zero crossing) are used to improve the recognition rate; they are powerful enough to extract features even from distorted characters and symbols. For the neural classifier, a back-propagation neural network with two hidden layers is used. The classifier is trained and tested on printed Hindi texts and achieves a correct recognition rate of approximately 90%.
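Two of the feature families the abstract names, projection histograms and vertical zero crossings, are simple enough to sketch directly on a binary glyph. This is the generic form of those features; the paper's exact variants (e.g. the mean-distance weighting) are not reproduced here:

```python
def projection_histograms(glyph):
    """Row and column projection histograms of a binary glyph image:
    the count of foreground pixels in each row and each column."""
    rows = [sum(row) for row in glyph]
    cols = [sum(col) for col in zip(*glyph)]
    return rows, cols

def vertical_zero_crossings(glyph):
    """Number of 0->1 / 1->0 transitions down each column, a simple
    structural feature that survives moderate glyph distortion."""
    return [sum(col[i] != col[i + 1] for i in range(len(col) - 1))
            for col in map(list, zip(*glyph))]
```

Concatenating such per-row and per-column counts yields the fixed-length vector that a back-propagation classifier like the one in the paper can consume.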