• Title/Summary/Keyword: New Words

Joke-Related Aspects and their Significance in Traditional Korean Funny Performing Arts (한국 전통연희에서의 재담의 양상과 그 의의)

  • Son, Tae-do
    • Journal of Korean Classical Literature and Education
    • /
    • no.32
    • /
    • pp.29-61
    • /
    • 2016
  • A joke (才談, 재담) is "the most interesting and witty language unit" in our speech. However, research on jokes is still in its early stages. Although jokes are related to witty and interesting talks, stories, songs, and plays, the actual object of joke research is only the witty and interesting talk: a joke is witty talk that is interesting or laughter-inducing. Many jokes can be found in the traditional Korean funny performing arts (演戱, 연희). This is because these art forms were performed in open yards, which made it necessary to amuse the audience, and amusement, in turn, required jokes. Jokes in the traditional funny performing arts can generally be classified as follows: 1) Jokes related to a situation: these include apt words for a given situation, exaggerating words, diminishing words, deviant words, and cause-and-effect words. 2) Jokes related to discourse: these include enumerating words, amplifying words, contrasting words, fluently lying words, undeniable words, deliberately unknowing words, and deliberately incorrect words. 3) Jokes related to vocabulary: these include synonyms, similar words, word-order-changing words, and incorrect words. 4) Jokes related to pronunciation: these include homonyms and anti-homonyms. Although there may be other kinds, those presented above are the typical ones. A joke is "the result that human beings can achieve when they have overcome natural and social difficulties and are left with only a free and creative spirit." Jokes are necessary in all ages and everywhere. Today, more varied and higher-level jokes can be created by developing the diversity of jokes found in traditional funny performing arts. I also expect new sorts of jokes to emerge, because a joke always demands a creative spirit.

LEFT INFERIOR FRONTAL GYRUS RELATED TO REPETITION PRIMING: LORETA IMAGING WITH 128-CHANNEL EEG AND INDIVIDUAL MRI

  • Kim, Young-Youn;Kim, Eun-Nam;Roh, Ah-Young;Goong, Yoon-Nam;Kim, Myung-Sun;Kwon, Jun-Soo
    • Proceedings of the Korean Society for Cognitive Science Conference
    • /
    • 2005.05a
    • /
    • pp.151-153
    • /
    • 2005
  • We investigated the brain substrate of repetition priming on an implicit memory task using low-resolution electromagnetic tomography (LORETA) with high-density 128-channel EEG and individual MRI as a realistic head model. Thirteen right-handed, healthy subjects performed a word/nonword discrimination task in which the words and nonwords were presented visually, and some of the words appeared twice with a lag of one or five items. All of the subjects exhibited repetition priming in the behavioral data: a faster reaction time was observed to a repeated word (old word) than to the first presentation of a word (new word). The old words elicited more positive-going potentials than the new words, beginning at 200 ms and lasting until 500 ms post-stimulus. We conducted source reconstruction using LORETA at a latency of 400 ms, at the peak of the mean global field potential, and used statistical parametric mapping for the statistical analysis. We found that the source elicited by the old words exhibited a statistically significant current density reduction in the left inferior frontal gyrus. This is the first study to investigate the generators of repetition priming using voxel-by-voxel statistical mapping of the current density with individual MRI and high-density EEG.
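The paper's core contrast is an old/new ERP effect in a 200-500 ms window. The following is a minimal sketch of that amplitude comparison only (not the LORETA source reconstruction), using placeholder data; the array shapes, 1000 Hz sampling rate, and effect size are assumptions for illustration.

```python
# A minimal sketch (not the authors' pipeline): comparing mean ERP amplitude
# for old vs. new words in the 200-500 ms window with a paired t-test.
import numpy as np
from scipy import stats

fs = 1000                      # sampling rate in Hz (assumed)
t0, t1 = 200, 500              # analysis window in ms post-stimulus

# erp_old / erp_new: subjects x timepoints, averaged over trials and channels
rng = np.random.default_rng(0)
erp_new = rng.normal(0.0, 1.0, (13, 800))              # placeholder data, 13 subjects
erp_old = erp_new + 0.5 + rng.normal(0, 0.2, (13, 800))  # old words: more positive-going

# Mean amplitude in the window for each subject
win = slice(int(t0 * fs / 1000), int(t1 * fs / 1000))
amp_old = erp_old[:, win].mean(axis=1)
amp_new = erp_new[:, win].mean(axis=1)

t, p = stats.ttest_rel(amp_old, amp_new)
print(f"old vs. new mean amplitude: t(12)={t:.2f}, p={p:.4f}")
```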

Morphological processing within the learning of new words: A study on individual differences (새로운 단어의 학습에서 형태소 처리의 영향: 개인차 연구)

  • Bae, Sungbong;Yi, Kwangoh;Masuda, Hisashi
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.2
    • /
    • pp.303-323
    • /
    • 2016
  • The present study investigates how individual differences in morphological awareness (MA) influence the learning of new words in young adults. Participants, divided into two groups according to their MA, were asked to learn the meanings of rare Hanja words in both morphologically supported and unsupported sentence contexts. The results indicate that high-MA participants were more successful in learning the meanings of the words than low-MA participants and that the group difference persisted for one week after learning. More importantly, the effect of MA was greater for rare words appearing within morphologically supported sentences. These results suggest that both the availability of morphological analysis during learning and individual differences in MA influence the learning of word meanings.

Variable Vocabulary Word Recognizer using Phonetic Knowledge-based Allophone Model (음성학적 지식 기반 변이음 모델을 이용한 가변 어휘 단어 인식기)

  • Kim, Hoi-Rin;Lee, Hang-Seop
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.2
    • /
    • pp.31-35
    • /
    • 1997
  • In this paper, we propose a variable vocabulary word recognizer that can recognize new words that do not exist in the training data. Such a recognizer requires an on-line lexicon generator that transforms new candidate words into the corresponding pronunciation sequences of phones without a large lexicon table, together with acoustic models that cover those phone sequences. In order to model the phones and allophones reliably, we define Korean allophones by triphone clustering based on phonetic knowledge of the preceding and succeeding phones of each phone. Using this clustering method, we generated 1,548 allophones from the POW (Phonetically Optimized Words) 3,848-word DB. We evaluated the proposed word recognizer with the POW 3,848 DB, the PBW (Phonetically Balanced Words) 445 DB, and a 244-word DB for a hotel reservation task. Experimental results showed word recognition accuracy of 79.6% for the POW DB, corresponding to the vocabulary-dependent case; 79.4% with a 445-word lexicon and 88.9% with a 100-word lexicon for the PBW DB; and 71.4% for the hotel reservation DB, corresponding to the vocabulary-independent case.
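The on-line lexicon idea can be sketched as a two-step lookup: a new word is converted to a phone sequence, each phone is expanded to its triphone context, and the triphone is mapped to a clustered allophone class. The grapheme-to-phoneme table and cluster map below are toy stand-ins, not the paper's actual 1,548-class inventory.

```python
# A minimal sketch (assumptions throughout) of an on-line lexicon generator
# that maps a new word to context-dependent allophone units.

G2P = {"hotel": ["h", "o", "t", "e", "l"]}   # hypothetical grapheme-to-phoneme result

# Hypothetical knowledge-based clustering: triphone context -> allophone class
ALLOPHONE_CLUSTER = {
    ("o", "t", "e"): "t_between_vowels",
    ("h", "o", "t"): "o_after_h",
}

def to_allophones(word):
    phones = G2P[word]
    padded = ["sil"] + phones + ["sil"]      # silence context at word boundaries
    units = []
    for i in range(1, len(padded) - 1):
        tri = (padded[i - 1], padded[i], padded[i + 1])
        # fall back to the context-independent phone if the triphone is unclustered
        units.append(ALLOPHONE_CLUSTER.get(tri, padded[i]))
    return units

print(to_allophones("hotel"))
```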

Crossing the "Great Fire Wall": A Study with Grounded Theory Examining How China Uses Twitter as a New Battlefield for Public Diplomacy

  • Guo, Jing
    • Journal of Public Diplomacy
    • /
    • v.1 no.2
    • /
    • pp.49-74
    • /
    • 2021
  • In this paper, I apply grounded theory to explore how Twitter became the battlefield for China's public diplomacy campaign. China's new move to global social media platforms, such as Twitter and Facebook, has been a controversial strategy in public diplomacy. This study analyzes Chinese Foreign Ministry Spokesperson Zhao Lijian's Twitter posts and comments. It characterizes China's recent diplomatic move to Twitter as a "war of words", with features including "leadership," "polarization," and "aggression," and possible effects of "resistance," "hatred," and "sarcasm" on the global community. The findings show that by failing to gauge public opinion and promote the country's positive image, China's current digital diplomacy strategy, as reflected in Zhao Lijian's tweets, has instead constructed a polarized political public sphere, contradicting the country's promoted "shared human destiny." The "war of words" model extends our understanding of China's new digital diplomacy move as a hybrid of state propaganda and self-performance. Such a strategy could spread hate speech and accelerate political polarization in cyberspace, despite improvements to China's homogeneous network building on Twitter.

A Study on the Extraction of Emotional Words for Media Facade (내용분석 및 자유연상을 통한 미디어 파사드의 감성어휘 추출)

  • Lee, Seung-min;Bang, Kee-chun
    • Journal of Digital Contents Society
    • /
    • v.16 no.5
    • /
    • pp.741-748
    • /
    • 2015
  • The aim of this paper is to select a distinct vocabulary for understanding users' experience of media facades and to lay the foundation for a media facade emotion scale. First, we assembled a set of emotional words sufficient to represent a general overview of Korean emotions, collected from various literature studies. Second, we found emotional words by collecting user opinions on the YouTube website. Finally, emotional words were collected from free-association phrases using an unstructured survey. The collected words were integrated according to common standards and organized into 39 words usable in the survey. As a result, we extracted 21 emotional words for measuring the emotions users express while watching a media facade: 'novel', 'cool', 'awesome', 'gorgeous', 'exciting', 'amazing', 'wonderful', 'showy', 'great', 'intense', 'good', 'grand', 'colorful', 'unique', 'variety', 'new', 'fun', 'beautiful', 'luxurious', 'mysterious', and 'satisfactory'. Using factor analysis, we then categorized the 21 words into 5 factors: 'surprise', 'attention', 'variety', 'aesthetics', and 'interest'.
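The final step, grouping rating-scale responses for the emotional words into a small number of factors, is a standard factor analysis. A minimal sketch with scikit-learn follows; the random ratings, the six-word subset, and the two-factor setting are illustrative assumptions, not the paper's data or its 5-factor solution.

```python
# A minimal sketch (not the authors' analysis): factor-analyzing survey ratings
# of emotional words to find a small number of underlying factors.
import numpy as np
from sklearn.decomposition import FactorAnalysis

words = ["novel", "cool", "awesome", "gorgeous", "exciting", "amazing"]
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(100, len(words))).astype(float)  # 100 respondents, 7-point scale

fa = FactorAnalysis(n_components=2, random_state=0)  # 2 factors for this toy set
fa.fit(ratings)

# Loadings: how strongly each word is associated with each factor
for word, loading in zip(words, fa.components_.T):
    print(word, np.round(loading, 2))
```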

Vocabulary Coverage Improvement for Embedded Continuous Speech Recognition Using Part-of-Speech Tagged Corpus (품사 부착 말뭉치를 이용한 임베디드용 연속음성인식의 어휘 적용률 개선)

  • Lim, Min-Kyu;Kim, Kwang-Ho;Kim, Ji-Hwan
    • MALSORI
    • /
    • no.67
    • /
    • pp.181-193
    • /
    • 2008
  • In this paper, we propose a vocabulary coverage improvement method for embedded continuous speech recognition (CSR) using a part-of-speech (POS) tagged corpus. We investigate the 152 POS tags defined in the Lancaster-Oslo/Bergen (LOB) corpus and word-POS tag pairs, and derive a new vocabulary through word addition. Words paired with some POS tags have to be included in vocabularies of any size, while the inclusion of words paired with other POS tags varies with the target vocabulary size. The 152 POS tags are therefore categorized according to whether word addition depends on the size of the vocabulary: using expert knowledge, we classify the POS tags first and then apply different word-addition rules based on the POS tags paired with the words. The performance of the proposed method is measured in terms of coverage and compared with that of vocabularies of the same size (5,000 words) derived from frequency lists. The coverage of the proposed method is 95.18% for the test short message service (SMS) text corpus, while the conventional vocabularies cover only 93.19% and 91.82% of the words appearing in the same SMS text corpus.
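Coverage here is simply the fraction of corpus tokens that fall inside a fixed-size vocabulary. The sketch below shows that measure plus a toy version of the POS-conditioned addition step; the "always include" word set stands in for closed-class POS tags and is not taken from the LOB tag set.

```python
# A minimal sketch of vocabulary coverage with a toy POS-conditioned addition rule.
from collections import Counter

def coverage(vocab, corpus_tokens):
    """Fraction of corpus tokens found in the vocabulary."""
    hits = sum(1 for tok in corpus_tokens if tok in vocab)
    return hits / len(corpus_tokens)

corpus = "call me when you get this message please call back".split()
freq_vocab = {w for w, _ in Counter(corpus).most_common(5)}

# Words whose (hypothetical) POS tags mandate inclusion in any vocabulary size,
# e.g. pronouns; other words enter only if frequency allows.
always_include = {"me", "you"}
vocab = freq_vocab | always_include

print(f"coverage: {coverage(vocab, corpus):.2%}")
```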

Word Sense Disambiguation based on Concept Learning with a focus on the Lowest Frequency Words (저빈도어를 고려한 개념학습 기반 의미 중의성 해소)

  • Kim Dong-Sung;Choe Jae-Woong
    • Language and Information
    • /
    • v.10 no.1
    • /
    • pp.21-46
    • /
    • 2006
  • This study proposes a Word Sense Disambiguation (WSD) algorithm based on concept learning, with special emphasis on statistically meaningful lowest-frequency words. Previous work on WSD typically makes use of collocation frequency and its probability. Such probability-based WSD approaches tend to ignore the lowest-frequency words, which can still be meaningful in context. In this paper, we present an algorithm that extracts and makes use of these meaningful lowest-frequency words in WSD. The learning method is adopted from the Find-Specific algorithm of Mitchell (1997), in which the search proceeds from specific predefined hypothesis spaces to more general ones. In our model, this algorithm is used to find contexts with the most specific classifiers first and then move to more general ones. We build up a small seed data set and apply it to relatively large test data. Following the algorithm in Yarowsky (1995), the classified test data are exhaustively folded back into the seed data, thus expanding it. However, this may introduce considerable noise into the seed data, so we introduce the 'maximum a posteriori hypothesis', based on Bayes' rule, to validate the noise status of the new seed data. We use the Naive Bayes classifier and show that applying the Find-Specific algorithm enhances the correctness of WSD.
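The bootstrapping-plus-posterior-filter loop can be sketched in a few lines: seed-labeled contexts train a Naive Bayes classifier, and only contexts labeled with high posterior probability are folded back into the seed set. The toy "bank" sentences and the 0.8 threshold below are illustrative assumptions; this shows the Yarowsky-style loop generically, not the paper's Find-Specific stage.

```python
# A minimal sketch of Yarowsky-style bootstrapping with a posterior threshold
# acting as the noise filter on newly added seed data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

seed_texts = ["river bank water flow", "bank loan interest money"]
seed_labels = ["RIVER", "FINANCE"]
unlabeled = ["fishing by the bank of the river", "the bank raised interest rates"]

for _ in range(3):                          # a few bootstrapping rounds
    if not unlabeled:
        break
    vec = CountVectorizer()
    X = vec.fit_transform(seed_texts)
    clf = MultinomialNB().fit(X, seed_labels)
    probs = clf.predict_proba(vec.transform(unlabeled))
    for text, p in zip(list(unlabeled), probs):
        if p.max() > 0.8:                   # high-posterior additions only
            seed_texts.append(text)
            seed_labels.append(clf.classes_[p.argmax()])
            unlabeled.remove(text)

print(list(zip(seed_texts, seed_labels)))
```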

A Study on the Direction of Art Policy through Semantic Network Analysis in New Normal Era (뉴노멀(New Normal) 시대 언어네트워크 분석에 의한 예술정책 방향 연구)

  • Kim, Mi Yeon;Kwon, Byeong Woong
    • Korean Association of Arts Management
    • /
    • no.58
    • /
    • pp.153-177
    • /
    • 2021
  • This study analyzes semantic networks concerning art policy in the New Normal era triggered by COVID-19, drawing on domestic and foreign policy trends. For the analysis, data containing the key words "Corona" and "Art" were collected from Google News and web documents from March to September 2020, from which 227 refined subject words were extracted; the extracted words were analyzed with frequency and centrality indicators using the NetMiner program. In addition, a visualization analysis of the semantic networks was carried out to examine the relationships between topic words. The semantic network analysis showed that the most frequent topic word was "Corona," with "Culture and Art," "Art," "Performance," "Online," and "Support" also in the most frequent group. In the centrality analysis, "Corona" ranked highest, followed by "the era," "after," "post," "art," and "cultural arts"; the high-frequency words "Corona," "art," and "cultural arts" likewise dominated most centrality measures. In particular, the top-level key words in both the frequency and the centrality analyses were 'online', 'support', and 'policy'. This indicates that, with social distancing becoming part of daily life due to COVID-19, non-face-to-face and online content is rising rapidly and support policies for the artistic community are needed.
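The frequency-and-centrality workflow can be sketched as follows: co-occurring topic words form an undirected network, and degree centrality flags the words connected to the most others. The study used NetMiner; networkx stands in for it here, and the three toy word lists are assumptions, not the collected corpus.

```python
# A minimal sketch (toy data) of frequency and degree-centrality analysis
# over a word co-occurrence network.
import itertools
from collections import Counter
import networkx as nx

documents = [
    ["corona", "art", "online", "support"],
    ["corona", "culture", "art", "policy"],
    ["corona", "online", "performance"],
]

freq = Counter(w for doc in documents for w in doc)

G = nx.Graph()
for doc in documents:
    for a, b in itertools.combinations(sorted(set(doc)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1          # repeated co-occurrence strengthens the tie
        else:
            G.add_edge(a, b, weight=1)

centrality = nx.degree_centrality(G)
for word in sorted(centrality, key=centrality.get, reverse=True):
    print(f"{word}: freq={freq[word]}, centrality={centrality[word]:.2f}")
```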

Investigation on the Effect of Multi-Vector Document Embedding for Interdisciplinary Knowledge Representation

  • Park, Jongin;Kim, Namgyu
    • Knowledge Management Research
    • /
    • v.21 no.1
    • /
    • pp.99-116
    • /
    • 2020
  • Text is the most widely used means of exchanging or expressing knowledge and information in the real world. Recently, research on structuring unstructured text data for text analysis has been actively performed. One of the most representative document embedding methods (i.e., Doc2Vec) generates a single vector for each document using all the words it contains. This causes a limitation in that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding algorithms map each document into only one vector, so it is not easy to properly represent a complex document with interdisciplinary subjects as a single vector. In this paper, we introduce a multi-vector document embedding method to overcome these limitations of traditional document embedding methods. After reviewing a previous study on multi-vector document embedding, we visually analyze the effects of the method. First, the new method vectorizes the document using only predefined keywords instead of all its words. Second, it decomposes the various subjects included in the document and generates multiple vectors for each document. Experiments on about three thousand academic papers revealed that the traditional single-vector approach cannot properly map complex documents because of interference among the subjects mixed in each vector. With the multi-vector method, we ascertained that the information and knowledge in complex documents can be represented more accurately by eliminating this interference.
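The two ideas in the abstract, vectorizing from predefined keywords only and producing one vector per subject, can be sketched generically: cluster the keywords' word vectors by subject, then average each cluster. The word vectors, keywords, and two-cluster setting below are toy assumptions, not the paper's method or data.

```python
# A minimal sketch of a multi-vector document embedding: keyword vectors are
# clustered into subjects, and each subject yields its own document vector.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical pre-trained vectors for a document's predefined keywords
keyword_vecs = {
    "neuron":  np.array([0.9, 0.1]),
    "synapse": np.array([0.8, 0.2]),
    "market":  np.array([0.1, 0.9]),
    "pricing": np.array([0.2, 0.8]),
}

X = np.vstack(list(keyword_vecs.values()))
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# One vector per detected subject: the mean of that subject's keyword vectors
doc_vectors = [X[labels == k].mean(axis=0) for k in range(2)]
print(np.round(doc_vectors, 2))  # two subject vectors for one interdisciplinary document
```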