• Title/Abstract/Keywords: linguistic features

Search results: 181 items

Development of a Deep Learning Model for Detecting Fake Reviews Using Author Linguistic Features (작성자 언어적 특성 기반 가짜 리뷰 탐지 딥러닝 모델 개발)

  • Shin, Dong Hoon; Shin, Woo Sik; Kim, Hee Woong
    • The Journal of Information Systems / Vol. 31, No. 4 / pp. 1-23 / 2022
  • Purpose This study aims to propose a deep learning-based fake review detection model that combines authors' linguistic features with the semantic information of reviews. Design/methodology/approach This study used 358,071 Yelp reviews to develop the fake review detection model. We employed Linguistic Inquiry and Word Count (LIWC) to extract 24 linguistic features of authors. We then used deep learning architectures such as the multilayer perceptron (MLP), long short-term memory (LSTM), and the transformer to learn linguistic and semantic features for fake review detection. Findings The results of our study show that detection models using both linguistic and semantic features outperformed models using a single type of feature. In addition, this study confirmed that the differences in linguistic features between fake and authentic reviewers are significant. That is, we found that linguistic features complement the semantic information of reviews and further enhance the predictive power of the fake review detection model.
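The abstract describes fusing author-level linguistic features (from LIWC) with review-level semantic vectors. A minimal sketch of that idea follows; the feature proxies below (word count, pronoun ratio, etc.) are illustrative stand-ins, not the paper's actual LIWC categories or model.

```python
# Sketch: extract toy "LIWC-style" author features and concatenate them
# with a semantic embedding, the simplest way to fuse both feature types.
import re

PRONOUNS = {"i", "me", "my", "we", "our", "you", "your"}

def author_linguistic_features(review: str) -> list:
    """Toy proxies for LIWC categories: word count, average word length,
    first/second-person pronoun ratio, exclamation-mark rate."""
    words = re.findall(r"[A-Za-z']+", review.lower())
    n = len(words) or 1
    avg_len = sum(len(w) for w in words) / n
    pronoun_ratio = sum(w in PRONOUNS for w in words) / n
    excl_rate = review.count("!") / max(len(review), 1)
    return [float(len(words)), avg_len, pronoun_ratio, excl_rate]

def combine(linguistic: list, semantic: list) -> list:
    # The paper's models learn from both feature types; concatenation
    # is the most basic fusion before a classifier such as an MLP.
    return linguistic + semantic

feats = author_linguistic_features("I love this place!!! Best pizza I ever had!")
x = combine(feats, [0.1, -0.2])  # stand-in 2-d "semantic" embedding
```

A real pipeline would replace the stand-in embedding with the output of an LSTM or transformer encoder over the review text.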

Korean EFL Learners' Sensitivity to Stylistic Differences in Their Letter Writing

  • Lee, Haemoon; Park, Heesoo
    • Journal of English Language & Literature / Vol. 56, No. 6 / pp. 1163-1190 / 2010
  • Korean EFL learners' stylistic sensitivity was examined through two types of letter writing, professional and personal. The basis of comparison with English native speakers' stylistic sensitivity was the set of linguistic style markers statistically identified by Biber's (1988) multi-dimensional model of variation in English. The main finding was that Korean university students were sensitive to stylistic differences in the correct direction, though their linguistic repertoire was limited to easy and simple linguistic features. The learners were also skewed toward the involved style in both types of letters, unlike the native speakers, which was interpreted as reflecting the general developmental path from informal to formal linguistic style. They were likewise skewed toward the explicit style in both types of letters, unlike the native speakers, which was interpreted as reflecting heavy reliance on one particular linguistic feature. As a whole, the learners' stylistic sensitivity relied heavily on the small number of linguistic features they had already acquired, which happened to be simple and basic ones.

Sentiment Analysis of Korean Using Effective Linguistic Features and Adjustment of Word Senses

  • Jang, Ha-Yeon; Shin, Hyo-Pil
    • Language and Information / Vol. 14, No. 2 / pp. 33-46 / 2010
  • This paper introduces a new linguistically focused approach to sentiment analysis (SA) of Korean. To overcome the shortcomings of previous works that focused mainly on statistical methods, we made effective use of various linguistic features reflecting the nature of Korean. These features include contextual shifters, modal affixes, and the morphological dependency of chunk structures. Moreover, to avoid possible confusion caused by ambiguous words and to improve the results of SA, we also propose simple word-sense adjustment methods using KOLON ontology mapping information. Through experiments we show that effective use of linguistic features and ontological information can improve the results of sentiment analysis of Korean.
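The "contextual shifter" feature mentioned above can be illustrated with a toy polarity scorer; the lexicon and shifter list below are invented examples, not the paper's KOLON-based resources, and a one-word shifter scope is a deliberate simplification.

```python
# Sketch: a contextual shifter flips the polarity of the following
# sentiment-bearing word (e.g. "not good" becomes negative).
POLARITY = {"good": 1, "great": 1, "bad": -1, "terrible": -1}
SHIFTERS = {"not", "hardly", "never"}

def score(tokens):
    """Sum word polarities, flipping the word right after a shifter."""
    total, flip = 0, False
    for tok in tokens:
        if tok in SHIFTERS:
            flip = True
            continue
        p = POLARITY.get(tok, 0)
        total += -p if flip else p
        flip = False  # shifter scope: exactly one content word
    return total
```

For Korean, the paper works at the morpheme level (modal affixes, chunk dependencies), so a real implementation would operate on morphological analysis output rather than whitespace tokens.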

Single Document Extractive Summarization Based on Deep Neural Networks Using Linguistic Analysis Features (언어 분석 자질을 활용한 인공신경망 기반의 단일 문서 추출 요약)

  • Lee, Gyoung Ho; Lee, Kong Joo
    • KIPS Transactions on Software and Data Engineering / Vol. 8, No. 8 / pp. 343-348 / 2019
  • In recent years, extractive summarization systems based on end-to-end deep learning models have become popular. These systems do not require human-crafted features and adopt data-driven approaches. However, previous related studies have shown that linguistic analysis features such as parts of speech, named entities, and word frequencies are useful for extracting important sentences from a document to generate a summary. In this paper, we propose an extractive summarization system based on deep neural networks that uses conventional linguistic analysis features. To demonstrate the usefulness of the linguistic analysis features, we compare models with and without those features. The experimental results show that the model with the linguistic analysis features improves the ROUGE-2 F1 score by 0.5 points over the model without them.

Comparison of Classification Performance Between Adult and Elderly Using Acoustic and Linguistic Features from Spontaneous Speech (자유대화의 음향적 특징 및 언어적 특징 기반의 성인과 노인 분류 성능 비교)

  • SeungHoon Han; Byung Ok Kang; Sunghee Dong
    • KIPS Transactions on Software and Data Engineering / Vol. 12, No. 8 / pp. 365-370 / 2023
  • This paper compares the performance of classifying speech data into two groups, adult and elderly, based on the acoustic and linguistic characteristics that change with aging, such as changes in respiratory patterns, phonation, pitch, frequency, and language expression ability. For acoustic features, we used attributes related to the frequency, amplitude, and spectrum of speech. For linguistic features, we extracted hidden-state vector representations containing contextual information from the transcriptions of speech utterances using KoBERT, a Korean pre-trained language model that has shown excellent performance on natural language processing tasks. The classification performance of each model trained on acoustic and linguistic features was evaluated, and the F1 scores of each model for the two classes were examined after addressing the class imbalance problem by down-sampling. The experimental results showed that linguistic features yielded better performance for classifying adult and elderly speakers than acoustic features, and that even when the class proportions were equal, classification performance for the adult class was higher than for the elderly class.
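The down-sampling step mentioned above can be sketched in a few lines; the class names are taken from the abstract, while everything else (seed, data shape) is an illustrative assumption.

```python
# Sketch: balance a two-class dataset ("adult" vs "elderly") by randomly
# dropping majority-class samples until both classes are equally frequent.
import random

def downsample(samples, labels, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    by_label = {}
    for x, y in zip(samples, labels):
        by_label.setdefault(y, []).append(x)
    n = min(len(v) for v in by_label.values())  # minority-class size
    out = []
    for y, xs in by_label.items():
        for x in rng.sample(xs, n):  # keep n random samples per class
            out.append((x, y))
    return out
```

Down-sampling is chosen here because the abstract names it explicitly; over-sampling or class-weighted losses are common alternatives.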

Crowdfunding Scams: The Profiles and Language of Deceivers

  • Lee, Seung-hun; Kim, Hyun-chul
    • Journal of the Korea Society of Computer and Information / Vol. 23, No. 3 / pp. 55-62 / 2018
  • In this paper, we propose a model to detect crowdfunding scams, which have reportedly been occurring over the last several years, based on their project information and linguistic features. To this end, we first collect and analyze crowdfunding scam projects, and then reveal which project-related information and linguistic features are particularly useful in distinguishing scam projects from non-scams. Our proposed model, built with the selected features and the Random Forest machine learning algorithm, detects scam campaigns with 84.46% accuracy.
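At toy scale, the ensemble-voting idea behind a Random Forest can be shown with a few hand-written rules voting on project attributes; the three rules and field names below are invented illustrations, not the features the paper selected.

```python
# Sketch: several weak decision rules vote, and the majority decides,
# mirroring how trees in a Random Forest vote on a class label.
def rule_short_description(p):  # hypothetical project-info feature
    return p["description_words"] < 50

def rule_no_updates(p):         # hypothetical project-info feature
    return p["updates"] == 0

def rule_many_exclaims(p):      # hypothetical linguistic feature
    return p["exclamation_rate"] > 0.05

RULES = [rule_short_description, rule_no_updates, rule_many_exclaims]

def predict_scam(project: dict) -> bool:
    votes = sum(rule(project) for rule in RULES)
    return votes >= 2  # majority vote over the "trees"
```

A real Random Forest learns its split rules from data and uses many randomized trees; this sketch only conveys the voting structure.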

Improvements on Phrase Breaks Prediction Using CRF (Conditional Random Fields) (CRF를 이용한 운율경계추정 성능개선)

  • Kim Seung-Won; Lee Geun-Bae; Kim Byeong-Chang
    • MALSORI / No. 57 / pp. 139-152 / 2006
  • In this paper, we present a phrase break prediction method using CRF (Conditional Random Fields), which performs well on classification problems. We mapped the phrase break prediction problem to a classification problem. We trained the CRF using various linguistic features extracted from POS (part-of-speech) tags, the lexicon, word length, and the location of each word in the sentence. Combined linguistic features were used in the experiments, and we were able to identify linguistic features that yield good phrase break prediction performance. The experimental results show that the proposed method improves on previous methods. Additionally, because the linguistic features in our approach are independent of each other, the proposed method is more flexible than other methods.
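A per-word feature extractor of the kind fed to a CRF can be sketched from the feature types the abstract lists (POS tag, lexical form, word length, position); the exact feature names and the sample sentence are illustrative.

```python
# Sketch: build the feature dict for the i-th (word, POS) pair of a
# sentence, covering the abstract's feature types: POS, lexicon,
# word length, and location of the word in the sentence.
def word_features(sent, i):
    word, pos = sent[i]
    feats = {
        "word": word,                          # lexical feature
        "pos": pos,                            # POS feature
        "length": len(word),                   # word-length feature
        "position": i / max(len(sent) - 1, 1), # relative location
        "BOS": i == 0,
        "EOS": i == len(sent) - 1,
    }
    if i > 0:
        feats["prev_pos"] = sent[i - 1][1]     # context feature
    return feats

sent = [("나는", "NP+JX"), ("학교에", "NNG+JKB"), ("간다", "VV+EF")]
feats = word_features(sent, 1)
```

A CRF toolkit would consume one such dict per word and learn which feature combinations predict a phrase break after that word.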

ON THE INCANTATORY FEATURES OF KOREAN SHAMANIC LANGUAGE (한국 무속어의 주술적 특성과 그 해석)

  • Choong-yon Park
    • Lingua Humanitatis / Vol. 1, No. 1 / pp. 295-321 / 2001
  • This paper attempts to demonstrate how the linguistic and mythological features of shamanic language make it incantatory, or 'enchanting'. Passages used in shamanic rites manifest linguistic characteristics that point to their own norms and conventions, as well as mythological features that contribute to the undecipherability of the shamanic language. Focusing on these estranged linguistic and mythological features, I propose that shamanic languages can best be interpreted in terms of linguistic hierarchization, a notion developed since Roman Jakobson's poetics. The present study adopts Eisele's framework, which reinterprets Jakobsonian hierarchization into a slightly revised notion based on the "degree of combinatorial freedom" and the "degree of semantic immediacy", and looks into a set of paradigm examples in search of parallel structures characterizing the shamanic language. The enchanting effect of this peculiar form of language, it is argued, is due mostly to the frequent use of lexical parallelism, which works in the reverse direction of the normal process of interpretation.

Feature-Based Relation Classification Using Quantified Relatedness Information

  • Huang, Jin-Xia; Choi, Key-Sun; Kim, Chang-Hyun; Kim, Young-Kil
    • ETRI Journal / Vol. 32, No. 3 / pp. 482-485 / 2010
  • Feature selection is very important for feature-based relation classification tasks. While most existing work on feature selection relies on linguistic information acquired using parsers, this letter proposes new features, including probabilistic and semantic relatedness features, to manifest the relatedness between patterns and certain relation types in an explicit way. The impact of each feature set is evaluated using both a chi-square estimator and a performance evaluation. The experiments show that the impact of the relatedness features is superior to that of existing well-known linguistic features, and that their contribution cannot be substituted by other commonly used linguistic feature sets.
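The chi-square estimator used to evaluate feature impact can be written out directly for the simplest case, a 2x2 contingency table (feature present/absent versus relation type yes/no); the shortcut formula below is the standard one for 2x2 tables and is a generic illustration, not the letter's full evaluation protocol.

```python
# Sketch: chi-square statistic for a 2x2 contingency table
# [[a, b], [c, d]], e.g. counts of (feature fires, relation holds) etc.
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0
```

A larger chi-square value indicates a stronger association between the feature and the relation type, which is how features are ranked for selection.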

Linguistic Features Discrimination for Social Issue Risk Classification (사회적 이슈 리스크 유형 분류를 위한 어휘 자질 선별)

  • Oh, Hyo-Jung; Yun, Bo-Hyun; Kim, Chan-Young
    • KIPS Transactions on Software and Data Engineering / Vol. 5, No. 11 / pp. 541-548 / 2016
  • The use of social media is already essential as a source of information for listening to users' various opinions and for monitoring. We define social 'risks' as issues that exert a negative influence on public opinion in social media. This paper aims to discriminate among various linguistic features and reveal their effects in building an automatic classification model of social risks. In particular, we adopt a word embedding technique to represent linguistic clues in risk sentences. As a preliminary experiment to analyze the characteristics of individual features, we corrected errors in the automatic linguistic analysis. As a result, the most important feature is named entity (NE) information, and the best condition is the combination of basic linguistic features, word embeddings, and word clusters within core predicates. Experimental results under real conditions on social big data, including linguistic analysis errors, show precisions of 92.08% and 85.84% for the frequent risk category set and the full test set, respectively.
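The per-set precision figures reported above are computed in the standard way; the sketch below shows that computation on toy labels (the label values are invented for illustration).

```python
# Sketch: precision for each predicted class label =
#   correct predictions of that label / all predictions of that label.
def precision_per_class(gold, pred):
    out = {}
    for lab in set(pred):
        predicted_gold = [g for g, p in zip(gold, pred) if p == lab]
        out[lab] = sum(g == lab for g in predicted_gold) / len(predicted_gold)
    return out

gold = ["risk", "none", "risk", "none"]
pred = ["risk", "risk", "none", "none"]
scores = precision_per_class(gold, pred)
```

Reporting precision separately on a frequent-category subset and on the full test set, as the paper does, shows how much rare risk categories degrade the model.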