• Title/Summary/Keyword: semantic short term memory

Language performance analysis based on multi-dimensional verbal short-term memories in patients with conduction aphasia (다차원 구어 단기기억에 따른 전도 실어증 환자의 언어수행력 분석)

  • Ha, Ji-Wan;Hwang, Yu Mi;Pyun, Sung-Bom
    • Korean Journal of Cognitive Science / v.23 no.4 / pp.425-455 / 2012
  • Multi-dimensional verbal short-term memory mechanisms are largely divided into a phonological channel and a lexical-semantic channel; the former is called phonological short-term memory and the latter semantic short-term memory. Phonological short-term memory is further segmented into a phonological input buffer and a phonological output buffer. In this study, the language performance of three patients with similar levels of conduction aphasia was analyzed in terms of multi-dimensional verbal short-term memory. To this end, the three patients were instructed to perform four language tasks, spontaneous speaking, repetition, spontaneous writing, and dictation, at both the word and sentence levels. In addition, the patients' phonological and semantic short-term memories were evaluated using digit span tests and verbal learning tests. The three subjects exhibited different patterns of performance and error responses across the four language tests, and the short-term memory tests likewise did not produce identical results. The language performance of the three patients can be explained according to whether the deficits occurred in the semantic short-term memory, the phonological input buffer, and/or the phonological output buffer. Based on these results, the relations between language and multi-dimensional verbal short-term memory are discussed.

The Biological Base of Learning and Memory (I): A Neuropsychological Review (학습과 기억의 생물학적 기초(I):신경심리학적 개관)

  • Kim, Munsoo
    • Korean Journal of Cognitive Science / v.7 no.3 / pp.7-36 / 1996
  • Recent neuropsychological studies on the neurobiological bases of learning and memory in humans are reviewed. At present, cognitive psychologists believe that memory is not a unitary system but is composed of several independent subsystems. Adopting this perspective, this paper summarizes findings regarding what kinds of memory disorders result from lesions of which brain areas, and which brain areas are activated by what kinds of learning/memory tasks. Short-term memory seems to involve widespread areas around the boundaries among the parietal, occipital, and temporal lobes, depending on the type of task and the way the stimuli are presented. Implicit memory, a subsystem of long-term memory, is not a unitary system itself; thus, the brain areas involved vary with the implicit memory tasks used. It is well known that the medial temporal lobe is necessary for the formation (i.e., consolidation) of explicit memory, another subsystem of long-term memory. Storage and/or retrieval of episodic and semantic memory involves the temporal neocortex. The prefrontal cortex seems to be involved in several aspects of memory, such as short-term memory and the retrieval of episodic and semantic memory. Finally, a popular view on the locus of long-term memory storage is described.

Multi-layered attentional peephole convolutional LSTM for abstractive text summarization

  • Rahman, Md. Motiur;Siddiqui, Fazlul Hasan
    • ETRI Journal / v.43 no.2 / pp.288-298 / 2021
  • Abstractive text summarization is the process of summarizing a given text by paraphrasing its facts while keeping the meaning intact. Manual summary generation is laborious and time-consuming. We present a summary generation model based on a multi-layered attentional peephole convolutional long short-term memory (MAPCoL) network that extracts abstractive summaries of large texts in an automated manner. We add attention to a peephole convolutional LSTM to improve the overall quality of a summary by weighting important parts of the source text during training. We evaluated the semantic coherence of our MAPCoL model on the popular CNN/Daily Mail dataset and found that MAPCoL outperformed other traditional LSTM-based models. We also found performance improvements for MAPCoL across different internal settings when compared with state-of-the-art abstractive text summarization models.
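The paper's full MAPCoL architecture is not reproduced in this listing; as a minimal sketch of the building block the abstract names, a peephole LSTM cell (whose gates can "see" the cell state directly) might look like the following in PyTorch. All class and variable names here are illustrative assumptions, not the authors' code.

```python
# Hedged sketch: a peephole LSTM cell, the building block named in the
# MAPCoL abstract. Peephole connections let the gates read the cell state.
import torch
import torch.nn as nn

class PeepholeLSTMCell(nn.Module):
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # One fused linear map for the input/forget/cell/output pre-activations.
        self.xh = nn.Linear(input_size + hidden_size, 4 * hidden_size)
        # Peephole weights: elementwise connections from the cell state to the gates.
        self.p_i = nn.Parameter(torch.zeros(hidden_size))
        self.p_f = nn.Parameter(torch.zeros(hidden_size))
        self.p_o = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x, state):
        h, c = state
        gates = self.xh(torch.cat([x, h], dim=-1))
        i, f, g, o = gates.chunk(4, dim=-1)
        i = torch.sigmoid(i + self.p_i * c)      # input gate peeks at c_{t-1}
        f = torch.sigmoid(f + self.p_f * c)      # forget gate peeks at c_{t-1}
        c_new = f * c + i * torch.tanh(g)
        o = torch.sigmoid(o + self.p_o * c_new)  # output gate peeks at c_t
        h_new = o * torch.tanh(c_new)
        return h_new, c_new
```

In MAPCoL the cells are additionally convolutional, stacked in multiple layers, and combined with attention over the source text; the sketch only illustrates the peephole connections.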

Is it necessary to distinguish semantic memory from episodic memory? (의미기억과 일화기억의 구분은 필요한가)

  • 이정모;박희경
    • Korean Journal of Cognitive Science / v.11 no.3_4 / pp.33-43 / 2000
  • The distinction between the short-term store (STS) and the long-term store (LTS) has been made within the information-processing perspective. Memory system theorists have argued that memory can be conceived as multiple memory systems beyond the concept of a single LTS. Popular memory system models are Schacter and Tulving's (1994) multiple memory systems and Squire's (1987) taxonomy of long-term memory. These models agree that amnesic patients have an intact STS but an impaired LTS, and that they have preserved implicit memory. However, there is a debate about the nature of the long-term memory impairment: one model considers the amnesic deficit a selective episodic memory impairment, whereas the other sees it as an impairment of both episodic and semantic memory. At present, it remains unclear whether episodic memory should be distinguished from semantic memory in terms of retrieval operations. The distinction between declarative and nondeclarative memory would be an alternative way to reflect explicit and implicit memory. Research focused on the function of the frontal lobe might give clues to the debate about the nature of the LTS.

Development of a Deep Learning Model for Detecting Fake Reviews Using Author Linguistic Features (작성자 언어적 특성 기반 가짜 리뷰 탐지 딥러닝 모델 개발)

  • Shin, Dong Hoon;Shin, Woo Sik;Kim, Hee Woong
    • The Journal of Information Systems / v.31 no.4 / pp.1-23 / 2022
  • Purpose: This study proposes a deep learning-based fake review detection model that combines authors' linguistic features with the semantic information of reviews. Design/methodology/approach: We used 358,071 Yelp reviews to develop the fake review detection model. We employed Linguistic Inquiry and Word Count (LIWC) to extract 24 linguistic features of authors, and then used deep learning architectures such as the multilayer perceptron (MLP), long short-term memory (LSTM), and the transformer to learn linguistic and semantic features for fake review detection. Findings: Detection models using both linguistic and semantic features outperformed models using a single type of feature. In addition, this study confirmed that the differences in linguistic features between fake and authentic reviewers are significant. That is, linguistic features complement the semantic information of reviews and further enhance the predictive power of the fake review detection model.
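As a hedged sketch of the fusion idea the abstract describes, the following PyTorch module concatenates an LSTM encoding of the review text with a vector of author linguistic features (e.g., the 24 LIWC features) before classification. Dimensions and names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: fuse a text encoding with handcrafted linguistic features.
import torch
import torch.nn as nn

class FakeReviewDetector(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128,
                 hidden_dim=128, num_ling_feats=24):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # The classifier sees [text encoding ; linguistic features].
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim + num_ling_feats, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # fake vs. authentic
        )

    def forward(self, token_ids, ling_feats):
        # token_ids: (batch, seq_len); ling_feats: (batch, num_ling_feats)
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        fused = torch.cat([h_n[-1], ling_feats], dim=-1)
        return self.classifier(fused)
```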

A Study of an Efficient Information Filtering System Using One-Hot Long Short-Term Memory

  • Kim, Hee sook;Lee, Min Hi
    • International Journal of Advanced Culture Technology / v.5 no.1 / pp.83-89 / 2017
  • In this paper, we propose an extended one-hot Long Short-Term Memory (LSTM) method and evaluate its performance on a spam filtering task. Most traditional methods proposed for spam filtering use word occurrences to represent spam or non-spam messages, ignoring all syntactic and semantic information. A major issue appears when spam and non-spam messages share many common words and noise words, making it challenging for the system to assign the correct labels. Unlike previous studies on information filtering, instead of using only word occurrence and word context as in probabilistic models, we apply a neural network-based approach to train the filter for better performance. In addition to the one-hot representation, using term weights with an attention mechanism allows the classifier to focus on words that are most likely to appear in the spam and non-spam collections. As a result, we obtained improvements over the performance of previous methods. We found that using region embedding and pooling features on top of the LSTM, along with the attention mechanism, allows the system to learn a better document representation for filtering tasks in general.
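A minimal sketch of the attention idea in this abstract: score each LSTM time step, softmax the scores into term weights, and pool the weighted states into a document vector for spam/non-spam classification. Sizes and names are illustrative assumptions; an embedding layer stands in for the paper's one-hot representation, and the region embedding and extra pooling features are omitted.

```python
# Hedged sketch: attention pooling over LSTM states for spam filtering.
import torch
import torch.nn as nn

class AttentionPoolingFilter(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100, hidden_dim=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.attn_score = nn.Linear(hidden_dim, 1)   # one score per time step
        self.out = nn.Linear(hidden_dim, 2)          # spam vs. non-spam

    def forward(self, token_ids):
        states, _ = self.lstm(self.embed(token_ids))              # (B, T, H)
        weights = torch.softmax(self.attn_score(states), dim=1)   # (B, T, 1)
        pooled = (weights * states).sum(dim=1)   # attention-weighted pooling
        return self.out(pooled)
```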

Aspect-Based Sentiment Analysis with Position Embedding Interactive Attention Network

  • Xiang, Yan;Zhang, Jiqun;Zhang, Zhoubin;Yu, Zhengtao;Xian, Yantuan
    • Journal of Information Processing Systems / v.18 no.5 / pp.614-627 / 2022
  • Aspect-based sentiment analysis discovers the sentiment polarity towards an aspect in user-generated natural language. So far, most methods use only the implicit position information of the aspect in the context, instead of directly exploiting the positional relationship between the aspect and the sentiment terms. In fact, words neighboring the aspect terms should be given more attention than other words in the context. This paper studies the influence of different position embedding methods on the sentiment polarities of given aspects, and proposes a position embedding interactive attention network based on a long short-term memory network. First, it uses the position information of the context in both the input layer and the attention layer. Second, it mines the importance of different context words for the aspect with an interactive attention mechanism. Finally, it generates a valid representation of the aspect and the context for sentiment classification. The proposed model was evaluated on the SemEval 2014 datasets; compared with baseline models, its accuracy increases by about 2% on the restaurant dataset and 1% on the laptop dataset.
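As a hedged sketch of the two ideas the abstract names, the following PyTorch module adds a relative-distance position embedding to each context word in the input layer, and uses a simple interactive attention step in which an averaged aspect representation weights the context words. It is illustrative only, not the authors' network.

```python
# Hedged sketch: position embeddings plus interactive attention for ABSA.
import torch
import torch.nn as nn

class PositionInteractiveAttention(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=100,
                 hidden_dim=100, max_dist=80):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # Embeds each context word's distance from the aspect term (< max_dist).
        self.pos_embed = nn.Embedding(max_dist, embed_dim)
        self.ctx_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.asp_lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, 3)  # negative / neutral / positive

    def forward(self, ctx_ids, ctx_dist, asp_ids):
        # ctx_dist: per-word distance from the aspect, clipped to max_dist - 1.
        ctx = self.word_embed(ctx_ids) + self.pos_embed(ctx_dist)
        ctx_h, _ = self.ctx_lstm(ctx)                       # (B, Tc, H)
        asp_h, _ = self.asp_lstm(self.word_embed(asp_ids))  # (B, Ta, H)
        asp_vec = asp_h.mean(dim=1, keepdim=True)           # (B, 1, H)
        # Interactive attention: the aspect vector scores each context word.
        scores = torch.softmax(
            (ctx_h * asp_vec).sum(-1, keepdim=True), dim=1)  # (B, Tc, 1)
        ctx_vec = (scores * ctx_h).sum(dim=1)                # (B, H)
        return self.out(torch.cat([ctx_vec, asp_vec.squeeze(1)], dim=-1))
```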

Weibo Disaster Rumor Recognition Method Based on Adversarial Training and Stacked Structure

  • Diao, Lei;Tang, Zhan;Guo, Xuchao;Bai, Zhao;Lu, Shuhan;Li, Lin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.10 / pp.3211-3229 / 2022
  • To address the problems in Weibo disaster rumor recognition, such as the lack of corpora, poorly standardized text, semantic information that is difficult to learn, and the relatively simple semantic features of disaster rumor text, this paper takes Sina Weibo as the data source, constructs a dataset for Weibo disaster rumor recognition, and proposes a deep learning model, BERT_AT_Stacked LSTM. First, adversarial perturbations are added to the embedding vector of each word to generate adversarial samples that enhance the features of rumor text, and adversarial training is carried out to address the relatively uniform text features of disaster rumors. Second, the BERT part obtains word-level semantic information for each Weibo post and generates a hidden vector containing sentence-level feature information. Finally, the hidden, complex semantic information of poorly standardized Weibo texts is learned using a Stacked Long Short-Term Memory (Stacked LSTM) structure. The experimental results show that, compared with the baseline models, the proposed model recognizes disaster rumors on Weibo more effectively, with an F1_Score of 97.48%; on an open general-domain dataset it achieves an F1_Score of 94.59%, indicating good generalization.
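A minimal sketch of embedding-level adversarial training as the abstract describes it, in the style of the fast gradient method: perturb the word embeddings along the loss gradient and train on both the clean and the perturbed pass. `model`, `loss_fn`, and `epsilon` are illustrative assumptions; the BERT and Stacked LSTM components are omitted.

```python
# Hedged sketch: one adversarial training step on input embeddings (FGM-style).
import torch

def adversarial_step(model, embeddings, labels, loss_fn, epsilon=1.0):
    # embeddings: a leaf tensor (batch, seq_len, dim), e.g. a detached lookup.
    embeddings.requires_grad_(True)
    loss = loss_fn(model(embeddings), labels)
    loss.backward()  # gradients land on model params and the embeddings

    # Adversarial sample: a small step along the gradient, norm-scaled so
    # the perturbation stays bounded.
    grad = embeddings.grad.detach()
    perturb = epsilon * grad / (grad.norm() + 1e-8)
    adv_embeddings = embeddings.detach() + perturb

    # Adversarial pass; its loss gradient is accumulated onto the clean one.
    adv_loss = loss_fn(model(adv_embeddings), labels)
    adv_loss.backward()
    return loss.item() + adv_loss.item()  # caller zeroes grads and steps
```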

Korean Semantic Role Labeling with Highway BiLSTM-CRFs (Highway BiLSTM-CRFs 모델을 이용한 한국어 의미역 결정)

  • Bae, Jangseong;Lee, Changki;Kim, Hyunki
    • Annual Conference on Human and Language Technology / 2017.10a / pp.159-162 / 2017
  • The Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is a deep learning model well suited to modeling sequential data. The Bidirectional LSTM RNN (BiLSTM RNN) applies the LSTM RNN, which resolves the vanishing gradient problem of RNNs, to the input data in both directions; since it can see all the information in the input sequence, it is widely used in natural language processing and various other fields. A Highway Network is a deep learning model that adds gates to the LSTM unit so that input information that has not passed through a nonlinear transformation can be used directly in the hidden layer. In this paper, we apply the Highway Network to Korean semantic role labeling and show that it achieves higher performance than previous work.
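As a hedged illustration of the gating the abstract describes, the sketch below shows a generic highway layer: a transform gate T(x) mixes a nonlinear transform H(x) with the untransformed input x, so information can pass through a layer unchanged. The paper attaches the gate to LSTM units inside a BiLSTM-CRF; this standalone layer is illustrative only.

```python
# Hedged sketch: a highway layer with a learned transform gate.
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)
        # Bias the gate negative so the layer starts close to the identity.
        nn.init.constant_(self.gate.bias, -2.0)

    def forward(self, x):
        h = torch.relu(self.transform(x))   # candidate transform H(x)
        t = torch.sigmoid(self.gate(x))     # transform gate T(x)
        return t * h + (1.0 - t) * x        # carry gate = 1 - T(x)
```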
