• Title/Summary/Keyword: Word Input


Language performance analysis based on multi-dimensional verbal short-term memories in patients with conduction aphasia (다차원 구어 단기기억에 따른 전도 실어증 환자의 언어수행력 분석)

  • Ha, Ji-Wan;Hwang, Yu Mi;Pyun, Sung-Bom
    • Korean Journal of Cognitive Science / v.23 no.4 / pp.425-455 / 2012
  • Multi-dimensional verbal short-term memory mechanisms are largely divided into a phonological channel and a lexical-semantic channel; the former is called phonological short-term memory and the latter semantic short-term memory. Phonological short-term memory is further segmented into the phonological input buffer and the phonological output buffer. In this study, the language performance of three patients with similar levels of conduction aphasia was analyzed in terms of multi-dimensional verbal short-term memory. To this end, the three patients were instructed to perform four types of language tasks, namely spontaneous speaking, repetition, spontaneous writing, and dictation, at both the word and sentence levels. In addition, the patients' phonological and semantic short-term memories were evaluated using digit span tests and verbal learning tests. The three subjects exhibited different patterns of performance and error responses across the four language tasks, and their short-term memory test results likewise differed. The language performance of the three patients can be explained according to whether the deficits occurred in semantic short-term memory, the phonological input buffer, and/or the phonological output buffer. Based on the results of the language and short-term memory tests, the relations between language and multi-dimensional verbal short-term memory in conduction aphasia are discussed.


Design of Low-Noise and High-Reliability Differential Paired eFuse OTP Memory (저잡음 · 고신뢰성 Differential Paired eFuse OTP 메모리 설계)

  • Kim, Min-Sung;Jin, Liyan;Hao, Wenchao;Ha, Pan-Bong;Kim, Young-Hee
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.10 / pp.2359-2368 / 2013
  • In this paper, an IRD (internal read data) circuit is proposed that prevents re-entry into the read mode and holds the read-out DOUT datum at power-up, even when noise such as a glitch occurs at a signal port such as the read input RD while the power IC is on. In addition, a pulsed WL (word line) driving method is used to prevent a DC current of several tens of microamperes from flowing into the read transistor of a differential paired eFuse OTP cell; reliability is thus secured by preventing non-blown eFuse links from being blown through electromigration (EM). Furthermore, in the program-verify-read mode, the comparison result between the programmed datum and the read-out datum is output to the PFb (pass/fail bar) pin while a sensing-margin test is performed with a variable pull-up load, taking into account the resistance variation of programmed eFuses. The layout size of the 8-bit eFuse OTP IP in a 0.18 µm process is 189.625 µm × 138.850 µm (= 0.0263 mm²).

Functional MRI of Language: Difference of its Activated Areas and Lateralization according to the Input Modality (언어의 기능적 자기공명영상: 자극방법에 따른 활성화와 편재화의 차이)

  • Ryoo, Jae-Wook;Cho, Jae-Min;Choi, Ho-Chul;Park, Mi-Jung;Choi, Hye-Young;Kim, Ji-Eun;Han, Heon;Kim, Sam-Soo;Jeon, Yong-Hwan;Khang, Hyun-Soo
    • Investigative Magnetic Resonance Imaging / v.15 no.2 / pp.130-138 / 2011
  • Purpose : To compare fMRI of visual and auditory word generation tasks, and to evaluate differences in the activated areas and in lateralization according to the mode of stimulation. Materials and Methods : Eight normal male volunteers, all right-handed, were included. Functional maps were obtained during auditory and visual word generation tasks in all subjects. Normalized group analyses were performed for each task, and the threshold for significance was set at p<0.05. The activated areas in each task were compared visually and statistically. Results : Both tasks showed left-dominant activation, which was more lateralized in the visual task. Both frontal lobes (Broca's area, premotor area, and SMA) and the left posterior middle temporal gyrus were activated in both tasks. Extensive bilateral temporal activation was noted in the auditory task, while occipital and parietal activation was demonstrated in the visual task. Conclusion : Modality-independent areas can be interpreted as core areas of language function, whereas modality-specific areas may be associated with processing of the stimuli. The visual task induced more lateralized activation and may be more useful than the auditory task in language studies.

Detecting Spelling Errors by Comparison of Words within a Document (문서내 단어간 비교를 통한 철자오류 검출)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information / v.16 no.12 / pp.83-92 / 2011
  • Unlike formal publications, documents prepared with word processors frequently contain typographical errors caused by the author's mistyping. In such documents, the most common orthographical errors are spelling errors that result from hitting keys adjacent to the intended ones. Typical spelling checkers detect and correct these errors with a morphological analyzer: the analyzer checks the well-formedness of input words, and all words it rejects are regarded as misspelled. However, if the analyzer happens to accept a mistyped word, that word is treated as correctly spelled. In this paper, I propose a simple method capable of detecting and correcting errors that previous methods cannot detect. The proposed method is based on the observation that typographical errors are generally not repeated and therefore tend to have very low frequency within a document. If the words generated by deletion, exchange, and transposition operations on each phoneme of a low-frequency word appear in the list of high-frequency words, some of them are taken to be the intended, correctly spelled words. Heuristic rules are also presented to reduce the number of candidates. The proposed method is able to detect some errors that are syntactically well-formed but semantically wrong, and is also useful for scoring candidates.
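
A minimal sketch of the frequency-based detection idea is given below. It is an illustration under simplifying assumptions rather than the paper's implementation: it operates on letters instead of decomposed Korean phonemes, and the keyboard-adjacency map and example tokens are hypothetical. It flags low-frequency words whose single-edit variants (deletion, exchange with an adjacent key, transposition) also occur as high-frequency words in the same document.

```python
from collections import Counter

# Hypothetical adjacency map; a real checker would cover the full keyboard layout
# (and, for Korean, operate on phonemes after decomposing syllables).
KEYBOARD_NEIGHBORS = {
    "a": "qwsz", "s": "awedxz", "d": "serfcx",
}

def edit_variants(word):
    """Generate deletion, adjacent-key exchange, and transposition variants of a word."""
    variants = set()
    for i in range(len(word)):
        variants.add(word[:i] + word[i + 1:])                              # deletion
        for near in KEYBOARD_NEIGHBORS.get(word[i], ""):
            variants.add(word[:i] + near + word[i + 1:])                   # exchange with a nearby key
        if i + 1 < len(word):
            variants.add(word[:i] + word[i + 1] + word[i] + word[i + 2:])  # transposition
    variants.discard(word)
    return variants

def suspect_typos(tokens, low=1, high=3):
    """Flag low-frequency tokens whose single-edit variants are frequent in the same document."""
    freq = Counter(tokens)
    frequent = {w for w, c in freq.items() if c >= high}
    suspects = {}
    for word, count in freq.items():
        if count <= low:
            candidates = edit_variants(word) & frequent
            if candidates:
                suspects[word] = sorted(candidates)
    return suspects

print(suspect_typos("sata data data data base base".split()))   # {'sata': ['data']}
```

In the paper, the same operations are applied per phoneme of Korean words, and additional heuristic rules prune the candidate list.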

FFT/IFFT IP Generator for OFDM Modems (OFDM 모뎀용 FFT/IFFT IP 자동 생성기)

  • Lee Jin-Woo;Shin Kyung-Wook;Kim Jong-Whan;Baek Young-Seok;Eo Ik-Soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.31 no.3A / pp.368-376 / 2006
  • This paper describes Fcore_GenSim (a parameterized FFT core generation and simulation program), which can be used to produce essential IP (intellectual property) blocks for various OFDM modem designs. Fcore_GenSim is composed of two parts: a parameterized core generator (PFFT_CoreGen) that generates Verilog-HDL models of FFT cores, and a fixed-point FFT simulator (FXP_FFTSim) that can be used to estimate the SQNR performance of the generated cores. The parameters that can be specified for core generation are the FFT length, in the range of 64 to 2048 points, and the word lengths of the input/output/internal/twiddle data, in the range of 8 to 24 bits with a 2-bit step. In total, 43,659 FFT cores can be generated by Fcore_GenSim. In addition, CBFP (convergent block floating point) scaling can optionally be specified. To optimize the hardware and SQNR performance of the generated cores, a hybrid structure of R2SDF and R2SDC stages and a hybrid algorithm of radix-2, radix-2/4, and radix-2/4/8 are adopted according to the FFT length and CBFP scaling.
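
To make the SQNR figure concrete, the following rough sketch (not FXP_FFTSim or the generated Verilog cores) quantizes only the FFT input to a chosen word length and measures the resulting signal-to-quantization-noise ratio against a double-precision reference; the actual simulator also models the internal, twiddle, and output word lengths and optional CBFP scaling.

```python
import numpy as np

def quantize(x, word_length):
    """Round to a signed fixed-point grid with the given total word length (1 sign bit)."""
    scale = 2 ** (word_length - 1)
    return np.round(x * scale) / scale

def input_sqnr_db(n_fft=1024, word_length=12, trials=20, seed=0):
    """Average SQNR (dB) of an FFT with quantized input versus the floating-point FFT."""
    rng = np.random.default_rng(seed)
    ratios = []
    for _ in range(trials):
        x = rng.uniform(-1, 1, n_fft) + 1j * rng.uniform(-1, 1, n_fft)
        ref = np.fft.fft(x)
        quant = np.fft.fft(quantize(x.real, word_length) + 1j * quantize(x.imag, word_length))
        noise = quant - ref
        ratios.append(np.sum(np.abs(ref) ** 2) / np.sum(np.abs(noise) ** 2))
    return 10 * np.log10(np.mean(ratios))

print(f"approx. SQNR with 12-bit input quantization: {input_sqnr_db():.1f} dB")
```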

A Study on the Extraction of Psychological Distance Embedded in Company's SNS Messages Using Machine Learning (머신 러닝을 활용한 회사 SNS 메시지에 내포된 심리적 거리 추출 연구)

  • Seongwon Lee;Jin Hyuk Kim
    • Information Systems Review / v.21 no.1 / pp.23-38 / 2019
  • Social network services (SNSs) are important marketing channels, so many companies actively exploit them by posting SNS messages whose content and style are tailored to their customers. In this paper, we focused on the psychological distance embedded in SNS messages and developed a method to measure it by combining traditional content analysis, natural language processing (NLP), and machine learning. Through traditional content analysis by human coding, the psychological distance was extracted from each SNS message, and these coding results were used as input data for the NLP and machine learning steps. With NLP, word embedding was performed and a bag-of-words representation was created. A Support Vector Machine (SVM), one of the machine learning techniques, was then trained and tested to predict the psychological distance of SNS messages. At first, the sensitivity and precision of the SVM predictions were quite low because of the extreme skewness of the dataset. We improved the SVM's performance by balancing the class ratio with an upsampling technique and by using only data coded with the same value in the first content analysis. All performance indices then exceeded 70%, showing that psychological distance can be measured reasonably well.
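
The pipeline described above can be sketched as follows. The snippet is a toy illustration, not the authors' corpus or code: the messages and psychological-distance labels ("near"/"far") are invented, scikit-learn is assumed, and the upsampling is a naive duplication of the minority class to counter the skewed label distribution before training a bag-of-words SVM.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC
from sklearn.utils import resample

# Hypothetical SNS messages with human-coded psychological-distance labels.
messages = ["limited offer just for you today", "our company announces its annual report",
            "hurry, your coupon expires tonight", "the board approved the new policy"]
labels = ["near", "far", "near", "far"]

# Upsample the minority class so both labels are equally represented.
near = [(m, y) for m, y in zip(messages, labels) if y == "near"]
far = [(m, y) for m, y in zip(messages, labels) if y == "far"]
minority, majority = (near, far) if len(near) < len(far) else (far, near)
balanced = majority + resample(minority, replace=True, n_samples=len(majority), random_state=0)

texts, y = zip(*balanced)
vectorizer = CountVectorizer()          # bag-of-words features
X = vectorizer.fit_transform(texts)

clf = SVC(kernel="linear").fit(X, y)    # SVM classifier for psychological distance
print(clf.predict(vectorizer.transform(["special deal ends today"])))
```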

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character based on sequential input data. N-gram models have been widely used, but they cannot efficiently model correlations between input units since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used for neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of distinct words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morphological analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of which Korean text is composed. We constructed language models using three or four LSTM layers. Each model was trained with the stochastic gradient algorithm and with more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. Each input vector consisted of 20 consecutive characters and the output was the following 21st character. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the training time of each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer models. On average, the 4-layer LSTM model required 69% more training time than the 3-layer model, yet its validation loss and perplexity were not significantly better and were even worse under some conditions. On the other hand, when the automatically generated sentences were compared, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost perfect grammatically. The results of this study are expected to be widely used for Korean language processing in the fields of natural language processing and speech recognition, which form the basis of artificial intelligence systems.
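
The model described above can be condensed into the following sketch. It is not the authors' exact configuration: their networks were trained on roughly one million windows of 74 Korean phoneme and punctuation symbols with Theano-backed Keras, whereas this toy version uses a tiny English stand-in text, an embedding layer, and tf.keras, keeping only the 20-symbols-in, 21st-symbol-out structure, the stacked LSTM layers, and the Adam optimizer.

```python
import numpy as np
import tensorflow as tf

text = "in the beginning god created the heaven and the earth " * 50   # toy stand-in corpus
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
window = 20

# Windows of 20 symbols predict the following 21st symbol.
X = np.array([[idx[c] for c in text[i:i + window]] for i in range(len(text) - window)])
y = np.array([idx[text[i + window]] for i in range(len(text) - window)])

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(len(chars), 32),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(len(chars), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, batch_size=128, verbose=0)

# Greedy generation from a seed window.
generated = list(text[:window])
for _ in range(40):
    probs = model.predict(np.array([[idx[c] for c in generated[-window:]]]), verbose=0)[0]
    generated.append(chars[int(np.argmax(probs))])
print("".join(generated))
```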

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.1-25 / 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes the sentiments consumers or the public feel about an arbitrary object from written texts. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments toward each aspect of an object. Since it has more practical value for business, ABSA is drawing attention from both academia and industry. For example, given a review that says "The restaurant is expensive but the food is really fantastic", general SA evaluates the overall sentiment toward the restaurant as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. Thus, ABSA enables more specific and effective marketing strategies. In order to perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and judge the sentiments toward them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted by extracting aspect terms and then performing ATSC to analyze the sentiments toward the given aspect terms, or by extracting aspect categories and then performing ACSC to analyze the sentiments toward the given aspect categories. Here, an aspect category is expressed in one or more aspect terms, or indirectly inferred from other words. In the preceding example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' included in the review. If the review includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. In contrast, an aspect category like 'price', which has no specific aspect term but can be indirectly guessed from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and mostly use 'aspect' for convenience. Note that ATSC analyzes the sentiment toward given aspect terms and therefore deals only with explicit aspects, whereas ACSC handles both explicit and implicit aspects. This study seeks answers to the following questions, which were ignored in previous studies applying the pre-trained BERT language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between QA (Question Answering) and NLI (Natural Language Inference) types in the sentence-pair configuration of the input data? Third, is there any performance difference according to the position of the sentence containing the aspect category in the QA- or NLI-type sentence-pair input? To answer these questions, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models that outperform existing studies without expanding the training dataset were derived. In addition, it was found that reflecting the output vectors of the aspect category tokens is more effective than using only the output vector of the [CLS] token as the classification vector. It was also found that QA-type input generally provides better performance than NLI-type input, and that the position of the sentence containing the aspect category in the QA type is irrelevant to performance. Although there may be some differences depending on the characteristics of the dataset, when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The methodology for designing ACSC models used in this study could be similarly applied to other tasks such as ATSC.
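
The two design choices examined above can be illustrated with the short sketch below. It is a hedged illustration, not the authors' released code: the QA-style auxiliary sentence wording is an assumption, the linear classifier is left untrained, and Hugging Face transformers with bert-base-uncased is assumed. It builds a sentence-pair input that names the aspect category and forms a classification vector from the aspect-token outputs in addition to the [CLS] output.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

review = "The restaurant is expensive but the food is really fantastic."
aspect = "price"
question = f"what do you think of the {aspect} of it ?"       # QA-type auxiliary sentence

enc = tokenizer(review, question, return_tensors="pt")        # sentence-pair input
with torch.no_grad():
    hidden = bert(**enc).last_hidden_state                    # (1, seq_len, 768)

# Locate the aspect-category token(s) and average their output vectors.
aspect_ids = set(tokenizer(aspect, add_special_tokens=False)["input_ids"])
positions = [i for i, t in enumerate(enc["input_ids"][0].tolist()) if t in aspect_ids]
cls_vec = hidden[:, 0, :]                                     # [CLS] output only
aspect_vec = hidden[:, positions, :].mean(dim=1)              # aspect-aware pooled vector

classifier = torch.nn.Linear(768 * 2, 3)                      # 3 sentiment classes (untrained here)
logits = classifier(torch.cat([cls_vec, aspect_vec], dim=-1))
print(logits.shape)                                           # torch.Size([1, 3])
```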

Addressing Low-Resource Problems in Statistical Machine Translation of Manual Signals in Sign Language (말뭉치 자원 희소성에 따른 통계적 수지 신호 번역 문제의 해결)

  • Park, Hancheol;Kim, Jung-Ho;Park, Jong C.
    • Journal of KIISE / v.44 no.2 / pp.163-170 / 2017
  • Despite the rise of studies on spoken-to-sign-language translation, the low-resource problems of sign language corpora have rarely been addressed. As a first step toward translating from spoken to sign language, we addressed the problems arising from resource scarcity when translating spoken language into manual signals using statistical machine translation techniques. More specifically, we proposed three preprocessing methods: 1) paraphrase generation, which increases the size of the corpora; 2) lemmatization, which increases the frequency of each word in the corpora and the translatability of new input words in the spoken language; and 3) elimination of function words that are not glossed into manual signals, which improves the matching of the corresponding constituents in the bilingual sentence pairs. In our experiments, we used different types of English-American Sign Language parallel corpora. The experimental results showed that each method, as well as their combination, improved the quality of manual signal translation regardless of the corpus type.
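
Two of the three preprocessing steps can be sketched briefly. The snippet below is a simplified stand-in, not the paper's pipeline: NLTK is assumed, its English stopword list substitutes for the paper's list of function words that receive no manual-sign gloss, and the paraphrase-generation step is omitted.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)
nltk.download("omw-1.4", quiet=True)

lemmatizer = WordNetLemmatizer()
function_words = set(stopwords.words("english"))   # stand-in for non-glossed function words

def preprocess(sentence):
    """Drop function words and reduce inflected forms before feeding the SMT system."""
    tokens = sentence.lower().split()
    kept = [t for t in tokens if t not in function_words]       # eliminate function words
    return [lemmatizer.lemmatize(t, pos="v") for t in kept]     # lemmatization

print(preprocess("she is walking to the stores"))   # e.g. ['walk', 'store']
```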

An Experimental Study on Automatic Summarization of Multiple News Articles (복수의 신문기사 자동요약에 관한 실험적 연구)

  • Kim, Yong-Kwang;Chung, Young-Mee
    • Journal of the Korean Society for Information Management / v.23 no.1 s.59 / pp.83-98 / 2006
  • This study proposes a template-based method for automatic summarization of multiple news articles using the semantic categories of sentences. First, the semantic categories of the core information to be included in a summary are identified from a training set of documents and their summaries. Then, cue words for each slot of the template are selected for later classification of news sentences into the relevant slots. When a news article is input, its event/accident category is identified, and key sentences are extracted from the article and placed into the relevant slots. The template, filled with simple sentences rather than the original long sentences, is used to generate a summary for the event/accident. In a user evaluation of the generated summaries, essential information was extracted with a recall of 54.1% and a precision of 58.1%, and the redundancy ratio was 11.6%.
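
The slot-filling step can be illustrated with a toy sketch. The template slots and cue words below are invented for illustration and are not taken from the paper: each sentence is assigned to the slot whose cue words it matches best, and the filled slots form the summary.

```python
# Hypothetical template slots with their cue words.
TEMPLATE_CUES = {
    "what_happened": {"collided", "exploded", "collapsed", "crashed"},
    "casualties": {"killed", "injured", "dead", "wounded"},
    "cause": {"because", "caused", "due"},
}

def fill_template(sentences):
    """Assign each sentence to the best-matching slot; keep the first sentence per slot."""
    slots = {}
    for sentence in sentences:
        words = set(sentence.lower().replace(",", "").replace(".", "").split())
        best_slot, best_hits = None, 0
        for slot, cues in TEMPLATE_CUES.items():
            hits = len(words & cues)
            if hits > best_hits:
                best_slot, best_hits = slot, hits
        if best_slot and best_slot not in slots:
            slots[best_slot] = sentence
    return slots

article = [
    "Two trains collided near the station on Monday morning.",
    "At least five people were injured, officials said.",
    "Investigators believe the accident was caused by a signal failure.",
]
print(fill_template(article))
```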