• Title/Summary/Keyword: text input


A Design on Informal Big Data Topic Extraction System Based on Spark Framework (Spark 프레임워크 기반 비정형 빅데이터 토픽 추출 시스템 설계)

  • Park, Kiejin
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.11
    • /
    • pp.521-526
    • /
    • 2016
  • As online informal text data are massive in volume and unstructured in nature, traditional relational data model technologies face limitations in data storage and analysis. Moreover, with dynamically generated, massive social data, real-time analysis of social users' reactions is hard to accomplish. In this paper, to easily capture the semantics of massive, informal online documents with an unsupervised learning mechanism, we design and implement an automatic topic extraction system based on the mass of words that constitute a document. The input data set for the proposed system is first generated using an N-gram algorithm to build multi-word terms that capture the meaning of sentences precisely, and Hadoop and Spark (an in-memory distributed computing framework) are adopted to run the topic model. In the experiments, TB-level input data are processed through the proposed preprocessing and topic extraction steps. We conclude that the proposed system shows good performance in extracting meaningful topics in time, as intermediate results are served directly from main memory instead of being read from an HDD.
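
The abstract pairs N-gram preprocessing with a topic model run on Hadoop and Spark but does not name the algorithm; the sketch below uses Spark MLlib's LDA as a representative stand-in, with the file path, column names, and parameters chosen for illustration.

```python
# Illustrative PySpark sketch: N-gram features feeding an LDA topic model.
# LDA is a stand-in for the paper's unnamed topic model; paths and
# parameters are assumptions.
from pyspark.sql import SparkSession
from pyspark.ml.feature import Tokenizer, NGram, CountVectorizer
from pyspark.ml.clustering import LDA

spark = SparkSession.builder.appName("topic-extraction").getOrCreate()
docs = spark.read.text("hdfs:///corpus/*.txt").withColumnRenamed("value", "text")

tokens = Tokenizer(inputCol="text", outputCol="words").transform(docs)
bigrams = NGram(n=2, inputCol="words", outputCol="ngrams").transform(tokens)

cv = CountVectorizer(inputCol="ngrams", outputCol="features", vocabSize=50000)
features = cv.fit(bigrams).transform(bigrams).cache()  # keep intermediates in memory

lda = LDA(k=20, maxIter=50, featuresCol="features")
lda.fit(features).describeTopics(maxTermsPerTopic=10).show(truncate=False)
```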

Development of Nuclear Piping Integrity Expert System (II) -System Development and Case Studies- (원자력배관 건전성평가 전문가시스템 개발(II) -시스템 개발 및 사례해석-)

  • Jeon, Hyeon-Gyu;Heo, Nam-Su;Kim, Yeong-Jin;Park, Yun-Won;Choe, Yeong-Hwan
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.25 no.6
    • /
    • pp.1015-1022
    • /
    • 2001
  • The objective of this paper is to develop an expert system, called NPIES, for nuclear piping integrity evaluation. This paper describes the structure and the development strategy of the NPIES system. The system consists of three parts: the data input part, the analysis part, and the output part. The data input part consists of the material properties database module and the user interface module. The analysis part consists of the LEFM, CDFD, J/T, and limit load modules, and 12 analysis routines for different cracks and loading conditions are provided. Analysis results are presented to the screen, a printer, and a text file in the output part. Several case studies on circumferentially cracked piping were performed to evaluate the accuracy and usefulness of the code. Maximum piping loads predicted by the NPIES system agreed well with those from 3-dimensional finite element analysis. In addition, even when the material properties were not fully given, the NPIES system provided reasonable evaluation results with material properties inferred from the material properties database module.
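
The abstract does not describe the internals of the analysis modules; as a rough illustration of the kind of screening check an LEFM module performs, the sketch below evaluates a Mode I stress intensity factor against fracture toughness for an idealized crack. The geometry factor and acceptance rule are textbook simplifications, not NPIES internals.

```python
# Hypothetical sketch of an LEFM-style screening check (not NPIES code).
# K_I = Y * sigma * sqrt(pi * a) for an idealized crack; Y is a geometry factor.
import math

def mode_i_sif(stress_mpa: float, crack_depth_m: float, geometry_factor: float = 1.0) -> float:
    """Mode I stress intensity factor in MPa*sqrt(m)."""
    return geometry_factor * stress_mpa * math.sqrt(math.pi * crack_depth_m)

def lefm_acceptable(stress_mpa: float, crack_depth_m: float,
                    fracture_toughness: float, geometry_factor: float = 1.0) -> bool:
    """Accept the flaw if K_I stays below the material's fracture toughness K_IC."""
    return mode_i_sif(stress_mpa, crack_depth_m, geometry_factor) < fracture_toughness

# Example: 150 MPa membrane stress, 5 mm deep crack, K_IC = 100 MPa*sqrt(m)
print(lefm_acceptable(150.0, 0.005, 100.0))  # True for this shallow crack
```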

A Study on the Korean-Stroke based Graphical Password Approach (한국어 획 기반 그래피컬 패스워드 기법에 관한 연구)

  • Ko, Tae-Hyoung;Shon, Tae-Shik;Hong, Man-Pyo
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.22 no.2
    • /
    • pp.189-200
    • /
    • 2012
  • With the increasing number of smart devices such as tablet PCs, smartphones, and netbooks, information security for smart devices in mobile environments has become an issue. It is important to enter a password safely, but because of hardware limitations, it is difficult to equip the various types of mobile devices with secondary input devices such as a keyboard and mouse. Also, loss of accuracy becomes a problem because input is entered through a touch screen. For these reasons, a shift can be expected from text-based password schemes to graphical password schemes, which are easy to use and resistant to shoulder-surfing attacks. This paper therefore proposes a new graphical password scheme, based on five strokes obtained by decomposing Korean characters, to defend against shoulder-surfing attacks.
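
The abstract builds the scheme on decomposing Korean characters into five strokes but does not give the mapping; as a first step, the illustrative sketch below splits a Hangul syllable into its jamo using the arithmetic layout of the Unicode Hangul Syllables block, on top of which a jamo-to-stroke mapping could be defined.

```python
# Illustrative first step for a stroke-based scheme: decompose a Hangul
# syllable into its jamo via the Unicode Hangul Syllables block layout.
# (The paper's actual jamo-to-stroke mapping is not given in the abstract.)

CHOSEONG = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
JUNGSEONG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
JONGSEONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")

def decompose(syllable: str) -> tuple:
    code = ord(syllable) - 0xAC00          # offset within the syllables block
    cho, rest = divmod(code, 21 * 28)      # 21 vowels x 28 (27 finals + none)
    jung, jong = divmod(rest, 28)
    return CHOSEONG[cho], JUNGSEONG[jung], JONGSEONG[jong]

print(decompose("한"))  # ('ㅎ', 'ㅏ', 'ㄴ')
```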

Analysis of deep learning-based deep clustering method (딥러닝 기반의 딥 클러스터링 방법에 대한 분석)

  • Hyun Kwon;Jun Lee
    • Convergence Security Journal
    • /
    • v.23 no.4
    • /
    • pp.61-70
    • /
    • 2023
  • Clustering is an unsupervised learning method that groups data based on features such as distance metrics, using data without known labels or ground-truth values. It has the advantage of being applicable to various types of data, including images, text, and audio, without the need for labeling. Traditional clustering techniques apply dimensionality reduction methods or extract specific features before clustering. With the advancement of deep learning models, however, research has emerged on deep clustering techniques that represent the input data as latent vectors using models such as autoencoders and generative adversarial networks. In this study, we propose a deep clustering technique based on deep learning. In this approach, we use an autoencoder to transform the input data into latent vectors, construct a vector space according to the cluster structure, and perform k-means clustering. We conducted experiments using the MNIST and Fashion-MNIST datasets with the PyTorch machine learning library as the experimental environment. The model is a convolutional-neural-network-based autoencoder. The experimental results show an accuracy of 89.42% for MNIST and 56.64% for Fashion-MNIST when k is set to 10.
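
A minimal sketch of the pipeline described above, assuming a simple convolutional autoencoder trained for reconstruction and scikit-learn's k-means with k=10 in place of the paper's exact architecture and training schedule:

```python
# Minimal sketch: convolutional autoencoder latents + k-means (k=10) on MNIST.
# Architecture and hyperparameters are illustrative, not the paper's setup.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from sklearn.cluster import KMeans

class ConvAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28 -> 14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14 -> 7
            nn.Flatten(), nn.Linear(32 * 7 * 7, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (32, 7, 7)),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

data = datasets.MNIST("data", train=True, download=True, transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=256, shuffle=True)

model, loss_fn = ConvAE(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(5):                       # short run for illustration
    for x, _ in loader:
        recon, _ = model(x)
        loss = loss_fn(recon, x)
        opt.zero_grad()
        loss.backward()
        opt.step()

with torch.no_grad():                        # cluster the learned latents
    latents = torch.cat([model.encoder(x) for x, _ in DataLoader(data, batch_size=512)])
labels = KMeans(n_clusters=10, n_init=10).fit_predict(latents.numpy())
```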

Corpus of Eye Movements in L3 Spanish Reading: A Prediction Model

  • Hui-Chuan Lu;Li-Chi Kao;Zong-Han Li;Wen-Hsiang Lu;An-Chung Cheng
    • Asia Pacific Journal of Corpus Research
    • /
    • v.5 no.1
    • /
    • pp.23-36
    • /
    • 2024
  • This research centers on the Taiwan Eye-Movement Corpus of Spanish (TECS), a specially created corpus comprising eye-tracking data from Chinese-speaking learners of Spanish as a third language in Taiwan. Its primary purpose is to explore the broad utility of TECS in understanding language learning processes, particularly the initial stages of language learning. Constructing this corpus involves gathering data on eye-tracking, reading comprehension, and language proficiency to develop a machine-learning model that predicts learner behaviors, which subsequently undergoes a predictability test for validation. The focus is on examining attention during input processing and its relationship to language learning outcomes. The TECS eye-tracking data consists of indicators derived from eye-movement recordings while reading Spanish sentences with temporal references. These indicators are obtained from eye-movement experiments focusing on tense verbal inflections and temporal adverbs. Chinese expresses tense using aspect markers, lexical references, and contextual cues, differing significantly from inflectional languages like Spanish, so Chinese-speaking learners of Spanish face particular challenges in learning verbal morphology and tenses. The data from the eye-movement experiments were structured into feature vectors, with learner behaviors serving as class labels. After categorizing the collected data, we used two types of machine learning methods for classification and regression: Random Forests and the k-nearest neighbors algorithm (KNN). By leveraging these algorithms, we predicted learner behaviors and conducted performance evaluations to enhance our understanding of the nexus between learner behaviors and the language learning process. Future research may further enrich TECS by gathering data from subsequent eye-movement experiments, specifically targeting various Spanish tenses and temporal lexical references during text reading. These endeavors promise to broaden and refine the corpus, advancing our understanding of language processing.
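
The abstract describes structuring eye-movement indicators into feature vectors with learner behaviors as class labels and applying Random Forests and KNN; a minimal scikit-learn sketch of that setup follows, with the feature columns and data shapes assumed rather than taken from TECS.

```python
# Illustrative sketch: classify learner behaviors from eye-movement features
# with Random Forests and KNN, as named in the abstract. Feature columns
# (fixation duration, regressions, etc.) are assumptions, not TECS fields.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # e.g. fixation duration, fixation count,
                                     # regression count, saccade length
y = rng.integers(0, 2, size=200)     # class labels for learner behaviors

for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=100)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```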

Study on a Methodology for Developing Shanghanlun Ontology (상한론(傷寒論)온톨로지 구축 방법론 연구)

  • Jung, Tae-Young;Kim, Hee-Yeol;Park, Jong-Hyun
    • Journal of Physiology & Pathology in Korean Medicine
    • /
    • v.25 no.5
    • /
    • pp.765-772
    • /
    • 2011
  • Knowledge represented in formal logic is widely used in many domains such as artificial intelligence, information retrieval, and e-commerce. In the medical field, retrieval of medical records, hospital information systems, medical data sharing, remote treatment, and expert systems all need knowledge representation technology. To retrieve information intelligently and provide advanced information services, a systematically controlled mechanism is needed to represent and share knowledge. Importantly, medical experts' knowledge should be represented in a form understandable to both computers and humans so that it can be applied to medical information systems supporting decision making. It should also have a structure suitable and efficient for its purposes, including reasoning, extensibility of knowledge, data management, accuracy of expression, and diversity. We call such a machine-processable representation an ontology. An ontology can represent traditional medicine knowledge in a structured and systematic way with visualization, and it can also serve as educational material. Hence, the authors developed a Shanghanlun ontology as a worked example, suggesting both a methodology for ontology development and a model for structuring traditional medical knowledge. This result can help students learn Shanghanlun through a graphical representation of its knowledge. We analyzed the text of Shanghanlun to construct a relational database containing its original text, symptoms, and herb formulas. We then classified the terms according to defined criteria and fixed the structure of the ontology to describe the semantic relations between terms, developing the ontology with visual representation particularly in mind. The ontology developed in this study provides a database covering formulas, herbs, symptoms, disease names, and the text of Shanghanlun. Contents can easily be retrieved through their semantic relations, making it convenient to search and learn the knowledge of Shanghanlun. The system can display related concepts for a searched term and provide expanded information with a single click. It has some limitations, such as standardization problems, short coverage of patterns (證), and errors in Chinese character input. Nevertheless, we believe this research can serve as a foundation for making traditional medicine more structured and systematic, for developing application software, and for Shanghanlun education.
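
As a rough illustration of the kind of triple structure such an ontology can use to link formulas, herbs, and symptoms, consider the rdflib sketch below; the namespace and predicate names are assumptions, not the paper's actual schema.

```python
# Illustrative rdflib sketch of formula-herb-symptom relations; the namespace
# and predicate names are assumptions, not the paper's actual schema.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

SHL = Namespace("http://example.org/shanghanlun#")  # hypothetical namespace
g = Graph()
g.bind("shl", SHL)

# A formula is composed of herbs and treats symptoms.
g.add((SHL.GuizhiTang, RDF.type, SHL.Formula))
g.add((SHL.GuizhiTang, RDFS.label, Literal("桂枝湯 (Guizhi Tang)")))
g.add((SHL.GuizhiTang, SHL.hasHerb, SHL.CinnamonTwig))
g.add((SHL.GuizhiTang, SHL.treatsSymptom, SHL.Fever))

# Query by semantic relation: which formulas treat fever?
q = "SELECT ?f WHERE { ?f shl:treatsSymptom shl:Fever . }"
for row in g.query(q):
    print(row.f)
```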

The Development of an Automatic Indexing System based on a Thesaurus (시소러스를 기반으로 하는 자동색인 시스템에 관한 연구)

  • 임형묵;정상철
    • Korean Journal of Cognitive Science
    • /
    • v.4 no.1
    • /
    • pp.213-242
    • /
    • 1993
  • During the past decades, several automatic indexing systems have been developed, including single-term indexing, phrase indexing, and thesaurus-based indexing systems. Among these, single-term indexing has been known as superior to the others despite the simplicity of how it extracts meaningful terms. On the other hand, thesaurus-based indexing has been perceived as producing a low retrieval rate, mainly because thesauri do not usually have enough index terms, so much of the text data fails to be indexed when it does not match any index term in the thesaurus. This paper develops a thesaurus-based indexing system, THINS, that yields a higher retrieval rate than other systems by syntactically analyzing the text data and matching it partially with index terms in the thesaurus. First, the system analyzes the input text syntactically using the machine translation system MATES/EK and extracts noun phrases. After deleting stop words from the noun phrases and stemming the remaining ones, it tries to index these with similar index terms in the thesaurus as much as possible. We conducted an experiment with the CACM data set that measures the retrieval effectiveness of THINS against a single-term-based system under HYKIS, a thesaurus-based information retrieval system. It turns out that THINS yields about 10 percent higher precision than the single-term-based system, while showing 8 to 9 percent lower recall. This is a substantial improvement over previous thesaurus-based systems, which yielded 25 to 30 percent lower precision than single-term-based ones. We also argue that the relatively lower recall is caused by the fact that CRCS, the thesaurus included in the CACM data set, is very incomplete, having only a little more than one thousand terms; thus THINS is expected to produce a much higher rate if it is associated with a currently available large thesaurus.
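
A rough sketch of the indexing pipeline described above (noun-phrase extraction, stop-word removal, stemming, then partial matching against thesaurus index terms); NLTK stands in for the MATES/EK parser, and the partial-match rule is an assumed simplification of THINS's logic.

```python
# Rough sketch of THINS-style indexing: extract nouns, drop stop words, stem,
# then partially match against thesaurus index terms. NLTK stands in for the
# MATES/EK parser; the partial-match rule here is an assumption.
# Requires nltk data: punkt, averaged_perceptron_tagger, stopwords.
from nltk import word_tokenize, pos_tag
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
stop = set(stopwords.words("english"))
thesaurus = {"information retrieval", "index term", "automatic indexing"}
thesaurus_stems = {t: {stemmer.stem(w) for w in t.split()} for t in thesaurus}

def index_terms(text: str) -> set:
    tagged = pos_tag(word_tokenize(text))
    nouns = [w.lower() for w, tag in tagged
             if tag.startswith("NN") and w.lower() not in stop]
    stems = {stemmer.stem(w) for w in nouns}
    # Partial match: accept a thesaurus term if any of its word stems appear.
    return {t for t, ts in thesaurus_stems.items() if ts & stems}

print(index_terms("Automatic indexing of documents improves retrieval."))
```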

A Study of Pre-trained Language Models for Korean Language Generation (한국어 자연어생성에 적합한 사전훈련 언어모델 특성 연구)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.4
    • /
    • pp.309-328
    • /
    • 2022
  • This study empirically analyzed Korean pre-trained language models (PLMs) designed for natural language generation. The performance of two PLMs - BART and GPT - on the task of abstractive text summarization was compared. To investigate how performance depends on the characteristics of the inference data, ten different document types, including six types of informational content as well as creative content, were considered. It was found that BART (which can both generate and understand natural language) performed better than GPT (which can only generate). Upon more detailed examination of the effect of inference data characteristics, the performance of GPT was found to be proportional to the length of the input text. However, even for the longest documents (where GPT performs best), BART still outperformed GPT, suggesting that the greatest influence on downstream performance is not the size of the training data or PLM parameters but the structural suitability of the PLM for the applied downstream task. The performance of the PLMs was also compared by analyzing parts-of-speech (POS) shares. BART's performance was inversely related to the proportion of prefixes, adjectives, adverbs, and verbs but positively related to that of nouns. This result emphasizes the importance of taking the inference data's characteristics into account when fine-tuning a PLM for its intended downstream task.
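
A minimal sketch of the abstractive summarization task on which the PLMs were compared, using Hugging Face transformers with a publicly available Korean BART checkpoint; the checkpoint name is illustrative and not necessarily the model evaluated in the paper.

```python
# Minimal sketch: abstractive summarization with a Korean BART checkpoint.
# The checkpoint is an illustrative public model, not the paper's exact setup.
from transformers import pipeline

summarizer = pipeline("summarization", model="gogamza/kobart-summarization")

document = "..."  # a Korean document to summarize
summary = summarizer(document, max_length=64, min_length=16, do_sample=False)
print(summary[0]["summary_text"])
```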

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In the sentiment analysis of English texts by deep learning, natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Several questions arise here. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which primarily relies on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely the first ones encountered when applying deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we reach a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these questions, we generate various types of morpheme vectors reflecting them and then compare classification accuracy through a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million cosmetics product reviews from Naver Shopping and 520,000 Naver News articles, roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ on the following three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing: either sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the considered range of POS tags, the minimum frequency of included morphemes, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model with context window 5 and vector dimension 300. It appears that utilizing text from the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags including the incomprehensible category lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included seem to have no definite influence on the classification accuracy.
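
The abstract specifies that all morpheme vectors come from a CBOW model with context window 5 and vector dimension 300; a minimal gensim sketch of that derivation follows, with toy morpheme-split sentences standing in for the analyzed reviews.

```python
# Minimal sketch: derive 300-dimensional morpheme vectors with CBOW
# (window 5), as specified in the abstract. The morpheme-split sentences
# below are toy input; a real run would use a Korean morphological analyzer.
from gensim.models import Word2Vec

# Each sentence is a list of morphemes, optionally with POS tags attached.
sentences = [
    ["이", "제품", "정말", "예쁘", "고", "좋", "아요"],
    ["배송", "빠르", "고", "포장", "꼼꼼", "하", "다"],
]

model = Word2Vec(
    sentences,
    vector_size=300,  # vector dimension 300
    window=5,         # context window 5
    sg=0,             # sg=0 selects CBOW
    min_count=1,      # the paper varies this minimum-frequency threshold
)
print(model.wv["예쁘"][:5])  # first 5 components of a morpheme vector
```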

Design and Implementation of the Java Applet-based Courseware (Java Applet 기반 코스웨어의 설계 및 구현)

  • Kim, Kyu-Soo;Kim, Hyun-Bae
    • Journal of The Korean Association of Information Education
    • /
    • v.4 no.2
    • /
    • pp.179-186
    • /
    • 2001
  • The purpose of this study is to design and implement a courseware that makes interaction between people and computers possible on the internet. For this, we select the contents of learning and design a courseware with text and graphic data, HTML, JavaScript, and Java applets. Some advantages of the courseware are as follows. Interaction between people and computers is possible by giving diverse feedback to input responses on the web, and the courseware can be accessed regardless of time and place when the user's network environment is suitably equipped. Finally, on the operator's side, revision of the courseware becomes easier, and on the client's side, fewer system resources are required.
