• Title/Summary/Keyword: Language Memory


Memory-based Pattern Completion in Database Semantics

  • Hausser Roland
    • Language and Information
    • /
    • v.9 no.1
    • /
    • pp.69-92
    • /
    • 2005
  • Pattern recognition in cognitive agents is based on (i) the uninterpreted input data (e.g. parameter values) provided by the agent's hardware devices and (ii) the interpreted patterns (e.g. templates) provided by the agent's memory. Computationally, the task consists in finding, for any given input, the memory data that best correspond to it. Once the best-fitting memory data have been found, the input is recognized by applying to it the interpretation stored with the memorized pattern. This paper presents a fast-converging procedure which starts from a few initially recognized items and then analyzes the remainder of the input by systematically checking for items that memory shows to have been related to the initial items in previous encounters. In this way, known patterns are tried first, and only when they have been exhausted is an elementary exploration of the input begun. Efficiency is further improved by choosing the candidate to be tested next according to frequency. (An illustrative sketch of this completion procedure follows this entry.)

  • PDF
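
The abstract describes the completion procedure only informally; the following is a minimal, hypothetical sketch of that idea, not Hausser's Database Semantics implementation. The co-occurrence memory, the frequency ordering, and all names (`PatternMemory`, `complete`, the example items) are illustrative assumptions.

```python
from collections import defaultdict

class PatternMemory:
    """Toy associative memory: counts how often items co-occurred in past encounters."""
    def __init__(self):
        self.cooccur = defaultdict(lambda: defaultdict(int))

    def store(self, items):
        # Record every pair of items seen together in one encounter.
        for a in items:
            for b in items:
                if a != b:
                    self.cooccur[a][b] += 1

    def complete(self, recognized, unlabeled):
        """Start from a few recognized items, try memory-related candidates first
        (most frequent first), and fall back to elementary exploration at the end."""
        remaining = set(unlabeled)
        result = list(recognized)
        frontier = list(recognized)
        while frontier and remaining:
            item = frontier.pop(0)
            # Candidates previously related to 'item', ordered by frequency.
            candidates = sorted(self.cooccur[item], key=self.cooccur[item].get, reverse=True)
            for c in candidates:
                if c in remaining:
                    remaining.discard(c)
                    result.append(c)
                    frontier.append(c)
        # Known patterns exhausted: explore whatever input is left.
        result.extend(sorted(remaining))
        return result

memory = PatternMemory()
memory.store(["nose", "eye", "mouth", "ear"])
print(memory.complete(recognized=["eye"], unlabeled=["mouth", "nose", "wheel"]))
```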

The effect of interview techniques on preschool children's memory accuracy and suggestibility (면접방식에 따른 유아의 기억 정확성 및 피암시성)

  • Woo Huyn-Kyung;Yi Soon-Hyung
    • Journal of Families and Better Life
    • /
    • v.23 no.1 s.73
    • /
    • pp.209-222
    • /
    • 2005
  • This study investigated the effect of interview techniques on the memory accuracy and suggestibility of preschool children. Forty-five preschool children participated in a magic show (the target event) and, one week later, all children received a suggestive interview in one of two conditions (a language condition and a drawing condition). After another week, each child's recall of the magic show was assessed. During the suggestive interview, children in the drawing condition showed more 'acceptance' responses than children in the language condition, and children in the question condition showed fewer 'remember' responses than children in the drawing condition. In the second interview, children reported more words, and in particular those in the language condition reported more of the suggested words than those in the drawing condition. Finally, children's recall was more accurate for controlled information about the event than for suggested information.

Abusive Detection Using Bidirectional Long Short-Term Memory Networks (양방향 장단기 메모리 신경망을 이용한 욕설 검출)

  • Na, In-Seop;Lee, Sin-Woo;Lee, Jae-Hak;Koh, Jin-Gwang
    • The Journal of Bigdata
    • /
    • v.4 no.2
    • /
    • pp.35-45
    • /
    • 2019
  • Recently, the social cost of the damage caused by malicious comments has been increasing, as has news of celebrities driven to suicide by their effects. The damage from malicious comments, including abusive language and slang, is increasing and spreading throughout society in various types and forms. In this paper, we propose a technique for detecting abusive language using a bidirectional long short-term memory (LSTM) neural network model. We collected comments from the web with a web crawler and removed stopwords and unused tokens such as English letters and special characters. For the preprocessed comments, a bidirectional LSTM model that considers both the preceding and following words of a sentence was used to determine and detect abusive language. To train the bidirectional LSTM network, the collected comments were morphologically analyzed and vectorized, and each word was labeled as abusive or not. Experimental results showed a performance of 88.79% on a total of 9,288 collected comments. (A hedged model sketch follows this entry.)

  • PDF
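
As a rough illustration of the model class named in the abstract, here is a minimal bidirectional LSTM classifier in PyTorch. It is not the authors' network: the vocabulary size, hidden sizes, single-layer architecture, and binary sigmoid output are assumptions chosen only to show how the forward and backward final states are combined for comment-level classification.

```python
import torch
import torch.nn as nn

class BiLSTMAbuseClassifier(nn.Module):
    """Illustrative bidirectional LSTM over integer-encoded, morphologically analyzed tokens."""
    def __init__(self, vocab_size=5000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, 1)   # 1 = abusive, 0 = clean (assumed labels)

    def forward(self, token_ids):
        x = self.embed(token_ids)                 # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)                # h_n: (2, batch, hidden_dim)
        h = torch.cat([h_n[0], h_n[1]], dim=1)    # concatenate forward and backward final states
        return torch.sigmoid(self.out(h)).squeeze(1)

model = BiLSTMAbuseClassifier()
batch = torch.randint(1, 5000, (4, 20))           # 4 dummy comments, 20 tokens each
print(model(batch))                               # abusive-language probabilities per comment
```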

Robust Syntactic Annotation of Corpora and Memory-Based Parsing

  • Hinrichs, Erhard W.
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2002.02a
    • /
    • pp.1-1
    • /
    • 2002
  • This talk provides an overview of current work in my research group on the syntactic annotation of the Tübingen corpus of spoken German and of the German Reference Corpus (Deutsches Referenzkorpus: DEREKO) of written texts. Morpho-syntactic and syntactic annotation, as well as annotation of function-argument structure, is performed automatically for these corpora by a hybrid architecture that combines robust symbolic parsing with finite-state methods ("chunk parsing" in the sense of Abney) and with memory-based parsing (in the sense of Daelemans). The resulting robust annotations can be used by theoretical linguists, who are interested in large-scale empirical data, and by computational linguists, who need training material for a wide range of language technology applications. To aid retrieval of annotated trees from the treebank, a query tool, VIQTORYA, with a graphical user interface and a logic-based query language has been developed. VIQTORYA allows users to query the treebanks for linguistic structures at the word level, at the level of individual phrases, and at the clausal level. (A toy memory-based tagging sketch follows this entry.)

  • PDF
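
Memory-based parsing in the sense of Daelemans is instance-based (lazy, nearest-neighbour) learning over local feature windows. The toy chunk tagger below sketches that idea with scikit-learn; the feature set, the two training sentences, and the IOB tags are illustrative assumptions and have nothing to do with the actual Tübingen/DEREKO pipeline.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def window_features(words, i):
    # Small local window: the token itself plus its left and right neighbours.
    return {
        "word": words[i],
        "prev": words[i - 1] if i > 0 else "<s>",
        "next": words[i + 1] if i + 1 < len(words) else "</s>",
    }

train_sentences = [
    (["the", "old", "man", "sleeps"], ["B-NP", "I-NP", "I-NP", "B-VP"]),
    (["a", "dog", "barks"], ["B-NP", "I-NP", "B-VP"]),
]
X = [window_features(ws, i) for ws, ts in train_sentences for i in range(len(ws))]
y = [t for _, ts in train_sentences for t in ts]

# "Memory-based": store the training instances and classify by nearest neighbour.
tagger = make_pipeline(DictVectorizer(), KNeighborsClassifier(n_neighbors=1))
tagger.fit(X, y)

test = ["the", "dog", "sleeps"]
print(list(zip(test, tagger.predict([window_features(test, i) for i in range(len(test))]))))
```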

An efficient Storage Reclamation Algorithm for RISC Parallel Processing (RISC 병렬 처리를 위한 기억공간의 효율적인 활용 알고리즘)

  • 이철원;임인칠
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.28B no.9
    • /
    • pp.703-711
    • /
    • 1991
  • In this paper, an efficient storage reclamation algorithm for RISC parallel processing in object-oriented programming environments is presented. Memory management for the dynamic memory allocation and frequent memory access typical of object-oriented programming is the main factor that decreases RISC parallel processing performance. The proposed algorithm efficiently allocates the memory space of a RISC computer that requires frequent memory access, and thereby increases RISC parallel processing performance. The efficiency of the proposed algorithm is verified by an implementation in C on a SUN SPARC (4.3 BSD UNIX). (A generic free-list sketch of the underlying idea follows this entry.)

  • PDF
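
The abstract does not spell out the reclamation algorithm itself, so the sketch below shows only the generic idea behind fast storage reclamation for frequent small allocations: reclaimed slots are kept on per-size free lists and reused before new memory is requested. The class and method names are hypothetical, and the sketch is in Python rather than the paper's C.

```python
class FreeListAllocator:
    """Toy per-size-class free-list reclamation scheme (illustrative only)."""
    def __init__(self):
        self.free_lists = {}          # size class -> list of reclaimed slot ids
        self.next_slot = 0

    def allocate(self, size):
        bucket = self.free_lists.get(size)
        if bucket:                    # reuse a reclaimed slot: no new memory needed
            return bucket.pop()
        slot = self.next_slot         # otherwise hand out a fresh slot
        self.next_slot += 1
        return slot

    def reclaim(self, slot, size):
        # Put the slot back on the free list of its size class for later reuse.
        self.free_lists.setdefault(size, []).append(slot)

alloc = FreeListAllocator()
a = alloc.allocate(32)
b = alloc.allocate(32)
alloc.reclaim(a, 32)
print(alloc.allocate(32) == a)        # True: the freed slot is reused immediately
```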

Manipulation of Memory Data Using SQL (SQL을 이용한 메모리 데이터 조작)

  • Ra, Young-Gook;Woo, Won-Seok
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.12
    • /
    • pp.597-610
    • /
    • 2011
  • In database application development, data coexists in memory and on disk. To manipulate in-memory data, general programming languages are used, while SQL is used to manipulate data on disk. Procedural code for memory manipulation is, in particular, harder to write and maintain than declarative languages such as SQL. This paper therefore shows that memory data with a particular structure, a tree, can be manipulated with SQL. Above all, the model data of user interfaces can be represented as a tree and can thus be processed with SQL, except for non-set computations, which can be handled by helper classes. SQL-based manipulation of memory data is best suited to database application development involving few complex computations.
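
A minimal sketch of the approach, assuming a simple adjacency-list table for the tree; it is not the paper's system. Tree-structured model data is kept in an in-memory SQLite table and manipulated declaratively with SQL (including a recursive query) instead of procedural traversal code; table and column names are illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, label TEXT)")
db.executemany("INSERT INTO node VALUES (?, ?, ?)", [
    (1, None, "window"),
    (2, 1, "menu"),
    (3, 1, "panel"),
    (4, 3, "button"),
])

# Declarative manipulation: rename every child of the 'panel' node in one statement.
db.execute("""
    UPDATE node SET label = label || '_disabled'
    WHERE parent_id = (SELECT id FROM node WHERE label = 'panel')
""")

# Set-oriented retrieval: list the whole subtree under 'window' with a recursive query.
for row in db.execute("""
    WITH RECURSIVE subtree(id, label) AS (
        SELECT id, label FROM node WHERE label = 'window'
        UNION ALL
        SELECT n.id, n.label FROM node n JOIN subtree s ON n.parent_id = s.id
    )
    SELECT id, label FROM subtree
"""):
    print(row)
```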

Hypernetwork Memory-Based Model for Infant's Language Learning (유아 언어학습에 대한 하이퍼망 메모리 기반 모델)

  • Lee, Ji-Hoon;Lee, Eun-Seok;Zhang, Byoung-Tak
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.12
    • /
    • pp.983-987
    • /
    • 2009
  • One of the critical themes in language acquisition is exposure to linguistic environments. The linguistic environments that interact with an infant include not only human beings, such as its parents, but also artificially crafted linguistic media. An infant learns a language by exploring this extensive language environment around it. Based on such large-scale exposure to linguistic data, we propose a machine learning method that flexibly and appropriately simulates the cognitive mechanism of an infant's language learning. The initial stage of an infant's language learning involves sentence learning and creation, which can be simulated by exposure to a language corpus. The core of the simulation is a memory-based learning model with a language hypernetwork structure. The language hypernetwork simulates developmental and progressive language learning over a stream of new data by representing higher-order connections between language components. In this paper, we simulate an infant's gradual, developmental learning progress by incrementally training the language hypernetwork on 32,744 sentences extracted from the video scripts of commercial animated movies for children.
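
The following is a toy sketch of the hypernetwork memory idea, not the authors' model: each training sentence contributes many small hyperedges (random word combinations) with weights, and a missing word is predicted by voting over hyperedges that match the remaining context. The edge order, sampling rate, and tiny corpus are illustrative assumptions.

```python
import random
from collections import Counter

class LanguageHypernetwork:
    def __init__(self, edge_order=3, samples_per_sentence=20):
        self.edge_order = edge_order
        self.samples = samples_per_sentence
        self.edges = Counter()                 # hyperedge (frozenset of words) -> weight

    def learn(self, sentence):
        # Each sentence adds many randomly sampled word combinations as hyperedges.
        words = sentence.lower().split()
        for _ in range(self.samples):
            if len(words) >= self.edge_order:
                self.edges[frozenset(random.sample(words, self.edge_order))] += 1

    def complete(self, context):
        """Vote for a missing word using hyperedges that overlap the given context words."""
        context = set(w.lower() for w in context)
        votes = Counter()
        for edge, weight in self.edges.items():
            missing = edge - context
            if len(missing) == 1 and len(edge & context) == self.edge_order - 1:
                votes[next(iter(missing))] += weight
        return votes.most_common(3)

net = LanguageHypernetwork()
for s in ["the cat drinks milk", "the dog drinks water", "the baby drinks milk"]:
    net.learn(s)
print(net.complete(["the", "baby", "drinks"]))   # 'milk' should rank highly
```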

1-Pass Semi-Dynamic Network Decoding Using a Subnetwork-Based Representation for Large Vocabulary Continuous Speech Recognition (대어휘 연속음성인식을 위한 서브네트워크 기반의 1-패스 세미다이나믹 네트워크 디코딩)

  • Chung Minhwa;Ahn Dong-Hoon
    • MALSORI
    • /
    • no.50
    • /
    • pp.51-69
    • /
    • 2004
  • In this paper, we present a one-pass semi-dynamic network decoding framework that inherits both the fast decoding speed of static network decoders and the memory efficiency of dynamic network decoders. Our method is based on a novel language model network representation that is essentially a finite state machine (FSM). The static network derived from the language model network [1][2] is partitioned into smaller subnetworks which are static by nature or self-structured. The whole network is dynamically managed so that the subnetworks required for decoding are cached in memory. The network is near-minimized by applying a tail-sharing algorithm. Our decoder is evaluated on a 25k-word Korean broadcast news transcription task. For the search network itself, the tail-sharing algorithm reduces the network by 73.4%. Compared with the equivalent static network decoder, the semi-dynamic network decoder increases decoding time by at most 6%, while it can be flexibly adapted to various memory configurations with a minimal usage of 37.6% of the complete network size. (A schematic caching sketch follows this entry.)

  • PDF
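
Only the memory-management idea is sketched below, not the decoder itself: the search network is split into subnetworks and only those touched during decoding are kept in a bounded in-memory cache. The LRU policy, cache size, and loader interface are assumptions for illustration; the paper's own caching strategy may differ.

```python
from collections import OrderedDict

class SubnetworkCache:
    """Bounded cache of search subnetworks, faulted in on demand (illustrative only)."""
    def __init__(self, load_fn, max_entries=4):
        self.load_fn = load_fn            # builds or loads a subnetwork from its id
        self.max_entries = max_entries    # mimics a fixed memory budget
        self.cache = OrderedDict()

    def get(self, subnet_id):
        if subnet_id in self.cache:
            self.cache.move_to_end(subnet_id)      # mark as recently used
            return self.cache[subnet_id]
        subnet = self.load_fn(subnet_id)           # cache miss: load the subnetwork
        self.cache[subnet_id] = subnet
        if len(self.cache) > self.max_entries:
            self.cache.popitem(last=False)         # evict the least recently used entry
        return subnet

loads = []
cache = SubnetworkCache(lambda i: loads.append(i) or f"subnet-{i}", max_entries=2)
for subnet_id in [0, 1, 0, 2, 1]:                  # subnetworks touched during decoding
    cache.get(subnet_id)
print(loads)   # [0, 1, 2, 1]: subnetwork 0 was served from the cache once
```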

A Goal Oriented Korean Dialog System based on Memory Network (Memory Network를 이용한 한국어 목적 대화 시스템 개발)

  • Choi, Min-Jin;Koo, Myoung-Wan
    • Annual Conference on Human and Language Technology
    • /
    • 2018.10a
    • /
    • pp.596-599
    • /
    • 2018
  • This paper presents research on developing a dialog system for schedule registration. Depending on the user's goal, such as registering, modifying, or deleting a schedule, the machine calls the corresponding API. Following the method proposed in DSCT 6, human-machine dialogs were divided, according to the type of API being called, into several kinds of small goal-oriented dialogs called tasks. A Memory Network was then developed for the classified goal tasks. The results showed a performance of 75% on the first classified task, 88% on the second, 89% on the third, and 90% when all tasks were combined. (A single-hop sketch of the Memory Network idea follows this entry.)

  • PDF
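
As a rough illustration of the model family named above, here is a single-hop memory-network sketch in PyTorch: dialog-history sentences are embedded as memories, the current utterance attends over them, and the combined representation scores the goal task (i.e. which API to call). The embedding sizes, bag-of-words sentence encoding, and task labels are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SingleHopMemoryNetwork(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, num_tasks=4):
        super().__init__()
        self.embed_memory = nn.EmbeddingBag(vocab_size, embed_dim, mode="sum")
        self.embed_query = nn.EmbeddingBag(vocab_size, embed_dim, mode="sum")
        self.embed_output = nn.EmbeddingBag(vocab_size, embed_dim, mode="sum")
        self.classify = nn.Linear(embed_dim, num_tasks)  # e.g. register/modify/delete/other

    def forward(self, memory_sents, query):
        # memory_sents: (num_memories, sent_len) token ids; query: (1, sent_len)
        m = self.embed_memory(memory_sents)              # input memory representation
        c = self.embed_output(memory_sents)              # output memory representation
        q = self.embed_query(query)                      # (1, embed_dim)
        attention = torch.softmax(m @ q.t(), dim=0)      # relevance of each memory to the query
        o = (attention * c).sum(dim=0, keepdim=True)     # weighted memory readout
        return self.classify(q + o)                      # scores over the goal tasks

net = SingleHopMemoryNetwork()
history = torch.randint(1, 1000, (5, 8))    # 5 previous dialog turns, 8 tokens each
utterance = torch.randint(1, 1000, (1, 8))  # current user utterance
print(net(history, utterance))              # unnormalized scores for each task / API call
```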

Spatiotemporal Grounding for a Language Based Cognitive System (언어기반의 인지시스템을 위한 시공간적 기초화)

  • Ahn, Hyun-Sik
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.15 no.1
    • /
    • pp.111-119
    • /
    • 2009
  • For daily-life interaction with humans, robots need the capability to encode and store cognitive information and to retrieve it contextually. In this paper, spatiotemporal grounding of cognitive information for a language-based cognitive system is presented. The cognitive information about an event that occurs at the robot is described with a sentence, stored in a memory, and retrieved contextually. Each sentence is parsed, classified by its functional type, and analyzed for argument structure so that it can be connected to cognitive information. With the proposed grounding, cognitive information is encoded in sentence form and stored in a sentence memory together with object descriptors. Sentences are retrieved to answer human questions by searching temporal information in the sentence memory and performing spatial reasoning over schematic imagery. An experiment shows the feasibility and efficiency of the spatiotemporal grounding for an advanced service robot.
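
A toy sketch of the sentence-memory idea described above, not the paper's system: each event is encoded as a time-stamped sentence with object descriptors, and a "where was X?" question is answered by temporal search over the stored records (the spatial-reasoning step is omitted). All field and method names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class SentenceRecord:
    time: float                                    # when the event was perceived
    sentence: str                                  # linguistic description of the event
    objects: dict = field(default_factory=dict)    # object descriptors, e.g. locations

class SentenceMemory:
    def __init__(self):
        self.records = []

    def encode(self, time, sentence, objects=None):
        # Store the event as a time-stamped sentence plus its object descriptors.
        self.records.append(SentenceRecord(time, sentence, objects or {}))

    def where_was(self, obj, before=None):
        """Answer 'where was X (before time t)?' by scanning records backwards in time."""
        for rec in sorted(self.records, key=lambda r: r.time, reverse=True):
            if obj in rec.objects and (before is None or rec.time <= before):
                return rec.objects[obj], rec.sentence
        return None

memory = SentenceMemory()
memory.encode(10.0, "The user put the cup on the table.", {"cup": "table"})
memory.encode(25.0, "The robot moved the cup to the shelf.", {"cup": "shelf"})
print(memory.where_was("cup"))               # most recent location of the cup
print(memory.where_was("cup", before=15.0))  # its location before t = 15
```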