• Title/Summary/Keyword: dictionary-based


Analysis and the Standardization Plan of the Terms Used by Seafarers on Small Vessel (소형선박 종사자 사용용어 실태 분석 및 표준화 방안)

  • Kang, Suk-Young; Ryu, Won; Bae, Chang-Won; Kim, Jong-Kwan
    • Journal of the Korean Society of Marine Environment & Safety / v.25 no.7 / pp.867-873 / 2019
  • As of August 2019, there were 3,823 vessels under 30 tons that fall into the category of small vessels, accounting for 42.5 % of the 9,001 registered vessels in Korea. The problem is that small vessel seafarers face many difficulties, such as on-board communication breakdowns and trouble communicating in maritime license interviews and maritime training, because they use a large number of nonstandard terms derived from foreign languages; this leads to a decline in their job skills. Therefore, in this study, we closely analyzed the terminology used by small vessel seafarers and proposed a standardization plan. In the terminology analysis, the preliminary terms of the maritime license interview and the high-frequency terms of the small vessel educational textbook were identified, and the corresponding nonstandard terms were examined. Based on a survey and an expert meeting, the incorrect Japanese-derived notation, the English notation, and the standard Korean word for each key term were presented in a questionnaire to determine which form was most familiar to respondents. The proportion of standard-word use was relatively high for nautical terms, whereas incorrect Japanese-derived notation was used more often for engine terms; analysis by age and tonnage likewise showed that Japanese-derived notation predominated and that English notation was used infrequently. Based on this, short- and long-term plans for promoting the use of standard words by small vessel seafarers were proposed, including the production of a standard-language dictionary of the terms these seafarers use, promotion of the importance of using standard terms, active education through educational institutions, and the systematic preparation and implementation of Korean-language education for foreign sailors.

Lee Ungno (1904-1989)'s Theory of Painting and Art Informel Perception in the 1950s (이응노(1904~1989)의 회화론과 1950년대 앵포르멜 미술에 대한 인식)

  • Lee, Janghoon
    • Korean Journal of Heritage: History & Science / v.52 no.2 / pp.172-195 / 2019
  • Among the paintings of Goam Lee Ungno (1904-1989), his works of the 1960s in Paris have been evaluated as his most avant-garde, experimenting with and innovating upon objects as an artist. At that time, his works, such as Papier Colle and Abstract Letter, were influenced by abstract expressionism and Western Art Informel, illustrating his transformation from a traditional artist into a contemporary artist. An exhibition held prior to his departure for Paris in March 1958 has received attention because it displayed the painting style of his early Informel art. Taking this into consideration, this study interprets his work from two perspectives: first, that his works of 1958 were influenced by abstract expressionism and Art Informel, and, second, that he expressed Xieyi (寫意) as literati painting, focusing on the fact that Lee Ungno first started his career in this style. In this paper, I aimed to confirm Lee Ungno's understanding of Art Informel and abstract painting, which can be called abstract expressionism. To achieve this, it was necessary to study Lee's painting theory at that time, so I first considered Hae-gang Kim Gyu-jin, under whom Lee Ungno began studying painting, and Lee's paintings during his time in Japan. It was confirmed that, in order to escape from stereotypical paintings, deep contemplation of nature while painting was his first important principle. This principle, also known as Xieyi (寫意), lasted until the 1950s. In addition, it is highly probable that he understood abstract painting in its dictionary sense, that is, as extracting shapes from nature according to ideas, which became important to him after studying in Japan, rather than in terms of the theory of abstract painting realized in Western art. Lee Ungno himself also stated that the shape of nature was the basis of abstract painting. In other words, abstractive painting and abstract painting are different concepts, and it is necessary to analyze Lee Ungno's paintings on this basis. Finally, I questioned the view that Lee Ungno's abstract paintings of the 1950s were painted as representative of the Xieyi (寫意) mind of literati painting. Linking traditional literati painting theory directly to Lee Ungno, who was active in a different time and place, may minimize his individuality and blur the distinction between traditional and contemporary paintings. Lee Ungno emphasized Xieyi (寫意) in his paintings; however, this may have been an emphasis on a grand proposition. This is because his works of the 1950s, such as Self-Portrait (1956), featured boldly distorted forms achieved by strong ink brushwork, a style that Lee Ungno defined as 'North Painting.' This is based on the view that it is necessary to distinguish between Xieyi (寫意) and 'the way of Xieyi (寫意) painting' as an important aspect of literati painting. Therefore, his paintings need a new interpretation from the viewpoint that he created abstract paintings in his own Xieyi (寫意) manner, rather than the view that his paintings were representations of Xieyi (寫意) or a succession of traditional paintings in the literati style.

Daesoon Jinrihoe's View of Human Beings (대순진리회의 인간관)

  • Ko, Byoung-chul
    • Journal of the Daesoon Academy of Sciences / v.28 / pp.1-34 / 2017
  • This paper aims to understand Daesoon Jinrihoe's view of human beings within the context of Korean religious history. Here, the context of Korean religious history refers to the view that every religion, including its doctrine, ritual, and organization, is created in a specific historical context. In accordance with this purpose, the paper consists of three main parts. First, chapter 2, 'An approach to the preceding research,' focuses on previous studies of Daesoon Jinrihoe's view of human beings. In this part, I have divided the previous studies into psychological, philosophical, educational, and comparative approaches. These prior studies show that research on the view of human beings started with approaches based on psychology and scriptural interpretation, which were later extended to philosophical, educational, and comparative fields of study. However, these studies suggest that more suitable explanatory factors are needed to explain the view of human beings. Second, chapter 3, 'Daesoon Jinrihoe's view of human beings,' explains the view of human beings by utilizing six factors: the origin of human beings, the components of human beings, the final judgment after death, the independence and subjectivity of human beings, the purpose(s) of life, and the practices of life. In comparison with previous studies, these explanatory factors may contribute to a more specific explanation of the view of human beings. Third, chapter 4, 'Remaining problems,' focuses on future research tasks based on the six factors mentioned above. In this part, I point out various research tasks that should be considered in future studies of Daesoon Jinrihoe's view of human beings, especially in connection with other religions. Finally, in the conclusion, I present two tasks for active research on Daesoon Jinrihoe's view of human beings: incorporating the terms related to humanity into Daesoon Jinrihoe's dictionary of scriptural terms, and establishing a department to discuss doctrine and related issues.

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae; Park, Eunbi; Han, Kiwoong; Lee, Junghyun; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest advantage of using a deep learning model for image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack distinctive regional features. To address the difficulty of classifying emotional images, researchers propose CNN-based architectures suited to emotional images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different colors induce different emotions. Among studies using deep learning, some have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the model on the image alone. This study proposes two methods for increasing accuracy by adjusting the result value after the model classifies an image's emotion; both modify the result value based on statistics derived from the colors of the picture. The two-color combinations most prevalent across all training data are found, and at test time the two-color combination most prevalent in each test image is identified; the result values are then corrected according to the distribution of that color combination. The correction weights the result value obtained after the model classifies an image's emotion using expressions based on logarithmic and exponential functions. Emotion6, labeled with six emotions, and Artphoto, labeled with eight categories, were used as the image data. DenseNet169, MnasNet, ResNet101, ResNet152, and VGG19 were used as CNN architectures, and performance was compared before and after applying the two-stage learning to each CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when building a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black, each of which carries meaning in color psychology. Using Scikit-learn's clustering, the seven colors most prevalent in each image are identified. The RGB coordinates of the colors extracted from the image are then compared with the RGB coordinates of the 16 colors above, and each is converted to the closest named color. If combinations of three or more colors were selected, too many combinations would occur and the distribution would become scattered, so each combination would have little influence on the result value. To avoid this problem, two-color combinations were found and used to weight the model's output. Before training, the most prevalent color combinations were found for all training images, and the distribution of color combinations for each class was stored in a Python dictionary to be used during testing. During testing, the two-color combination most prevalent in each test image is found, its distribution across the training data is checked, and the result is corrected accordingly. We devised several equations to weight the result value from the model based on the extracted colors as described above. The data set was randomly split 80:20, and 20% of the data was held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times using different validation sets. Finally, performance was checked using the previously separated test set. Adam was used as the optimizer, and the learning rate was set to 0.01. Training was run for up to 20 epochs, and if the validation loss did not decrease for five epochs, the experiment was stopped. Early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN than when only the CNN architecture was used.
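The color-correction step described in this abstract can be sketched roughly as follows. This is an illustrative Python sketch, not the authors' code: the 16 reference RGB values, the cluster count of seven, and the log-based weighting expression are assumptions standing in for the equations the paper devised.

```python
# Illustrative sketch of the two-color extraction and result-weighting idea.
import numpy as np
from sklearn.cluster import KMeans

REFERENCE_COLORS = {            # assumed RGB anchors for the 16 named colors
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
    "magenta": (255, 0, 255), "brown": (139, 69, 19), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0),
    "white": (255, 255, 255), "black": (0, 0, 0),
}

def dominant_color_pair(image_rgb, n_clusters=7):
    """Cluster the pixels, snap each cluster centre to the nearest named color,
    and return the two most frequent named colors as a sorted pair."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    names = list(REFERENCE_COLORS)
    ref = np.array(list(REFERENCE_COLORS.values()), dtype=float)
    tally = {}
    for centre, count in zip(km.cluster_centers_, counts):
        name = names[int(np.argmin(np.linalg.norm(ref - centre, axis=1)))]
        tally[name] = tally.get(name, 0) + int(count)
    return tuple(sorted(sorted(tally, key=tally.get, reverse=True)[:2]))

def corrected_scores(cnn_probs, color_pair, pair_class_counts):
    """Re-weight the CNN's class probabilities by how often this color pair
    appeared in each training class (log-scaled, illustrative formula only)."""
    counts = np.array([pair_class_counts.get((color_pair, c), 0)
                       for c in range(len(cnn_probs))], dtype=float)
    weights = np.log1p(counts) / (np.log1p(counts).sum() + 1e-9)
    return cnn_probs * (1.0 + weights)
```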

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn; Chung, Yeojin; Lee, Jaejoon; Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and natural language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects that are entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors during decomposition (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit of Korean text. We construct the language model using three or four LSTM layers. Each model was trained with the stochastic gradient algorithm as well as more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with a Theano backend. After preprocessing the texts, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss function evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest to train for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, but its validation loss and perplexity were not significantly improved and even became worse under specific conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for the processing of the Korean language in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
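As a rough illustration of the model family described above (not the authors' exact configuration), the sketch below builds a 3-layer character/phoneme-level LSTM in Keras together with the 20-symbols-in, 1-symbol-out training windows. The layer width of 256 units and the use of one-hot inputs are assumptions.

```python
# Minimal Keras sketch of a phoneme-level LSTM language model.
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

VOCAB_SIZE, SEQ_LEN = 74, 20   # 74 unique symbols, 20-character context window

def build_model(n_lstm_layers=3, units=256):
    """Stack LSTM layers and predict the next symbol with a softmax output."""
    model = Sequential()
    model.add(LSTM(units, return_sequences=(n_lstm_layers > 1),
                   input_shape=(SEQ_LEN, VOCAB_SIZE)))
    for i in range(1, n_lstm_layers):
        model.add(LSTM(units, return_sequences=(i < n_lstm_layers - 1)))
    model.add(Dense(VOCAB_SIZE, activation="softmax"))
    model.compile(loss="categorical_crossentropy", optimizer="adam")
    return model

def make_windows(encoded, seq_len=SEQ_LEN):
    """Slide a window over one-hot encoded text: 20 symbols in, the 21st out."""
    X = np.array([encoded[i:i + seq_len] for i in range(len(encoded) - seq_len)])
    y = np.array([encoded[i + seq_len] for i in range(len(encoded) - seq_len)])
    return X, y
```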

Influence analysis of Internet buzz to corporate performance : Individual stock price prediction using sentiment analysis of online news (온라인 언급이 기업 성과에 미치는 영향 분석 : 뉴스 감성분석을 통한 기업별 주가 예측)

  • Jeong, Ji Seon; Kim, Dong Sung; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.37-51 / 2015
  • Owing to the development of internet technology and the rapid increase of internet data, various studies have been actively conducted on how to use and analyze internet data for various purposes. In particular, in recent years, a number of studies have applied text mining techniques to overcome the limitations of analyses based only on structured data. There have been especially many studies on sentiment analysis, which scores opinions based on the polarity (positivity or negativity) of the words or sentences in documents. As part of this line of research, this study tries to predict the ups and downs of companies' stock prices by performing sentiment analysis on online news about those companies. A variety of news about companies is produced online by different economic agents, and it diffuses quickly and is easily accessed on the internet. Therefore, based on the inefficient market hypothesis, we can expect that news about an individual company can be used to predict fluctuations in its stock price if proper data analysis techniques are applied. However, because companies operate in different areas of business, machine-learning analysis of text data must consider the characteristics of each company. In addition, since news containing positive or negative information about certain companies affects other companies and industries in various ways, prediction needs to be performed at the level of each individual company. Therefore, this study attempted to predict changes in the stock prices of individual companies by applying sentiment analysis to online news data. Accordingly, this study chose top companies in the KOSPI 200 as the subjects of analysis and collected and analyzed two years of online news data for each company from Naver, a representative domestic search portal service. In addition, considering that the same vocabulary can carry different meanings for different economic subjects, we aimed to improve performance by building a lexicon for each individual company and applying it to the analysis. As a result of the analysis, prediction accuracy differed by company and was 56% on average. Comparing prediction accuracy across industry sectors, 'energy/chemical', 'consumer goods for living', and 'consumer discretionary' showed relatively higher accuracy than other industries, while sectors such as 'information technology' and 'shipbuilding/transportation' showed lower accuracy. Since only five representative companies were collected per industry, it is somewhat difficult to generalize, but it could be confirmed that prediction accuracy differed depending on the industry sector. At the individual company level, companies such as 'Kangwon Land', 'KT&G', and 'SK Innovation' showed relatively higher prediction accuracy, while companies such as 'Young Poong', 'LG', 'Samsung Life Insurance', and 'Doosan' had low prediction accuracy of less than 50%. In this paper, we predicted the stock price movements of individual companies using pre-built company-specific lexicons to take advantage of online news information, aiming to improve the performance of stock price prediction. Based on this, future work can increase prediction accuracy by addressing the problem of unnecessary words being added to the sentiment dictionary.
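The company-specific lexicon idea can be sketched minimally as below. The toy lexicon, the simple whitespace-free token lists, and the sign-based up/down rule are illustrative assumptions; the study's actual lexicons, features, and classifier are not reproduced here.

```python
# Sketch: score a day's news with a company-specific sentiment lexicon
# and turn the aggregate score into an up/down call.
from collections import Counter

def sentiment_score(tokens, lexicon):
    """Sum the polarity weights of lexicon words found in the news tokens."""
    counts = Counter(tokens)
    return sum(weight * counts[word] for word, weight in lexicon.items())

def predict_direction(daily_news_tokens, company_lexicon, threshold=0.0):
    """Predict 'up' when the aggregate daily sentiment exceeds the threshold."""
    total = sum(sentiment_score(tokens, company_lexicon)
                for tokens in daily_news_tokens)
    return "up" if total > threshold else "down"

# Toy example for a single (hypothetical) company lexicon
lexicon = {"record": +1, "growth": +1, "lawsuit": -1, "recall": -1}
news = [["record", "quarterly", "growth"], ["minor", "recall"]]
print(predict_direction(news, lexicon))   # -> "up"
```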

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon; Jung, Sang Hyung; Kim, Jun Ho; Min, Eun Joo; Yeo, Un Yeong; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.97-117 / 2020
  • Product evaluation criteria are indicators describing the attributes or values of products that enable users and manufacturers to measure and understand them. When companies analyze their products or compare them with competitors, appropriate criteria must be selected for objective evaluation. The criteria should reflect the product features that consumers consider when they purchase, use, and evaluate products. However, current evaluation criteria do not reflect how consumers' opinions differ from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, these approaches still produce criteria irrelevant to the products because the extracted or improper words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model that extracts candidate criteria words from online reviews using LDA and refines them with the k-nearest neighbor approach. The proposed approach starts with a preparation phase consisting of six steps. First, it collects review data from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories. Review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews using part-of-speech information from a morpheme analysis module. After preprocessing, LDA produces the top words for each topic, and only the nouns among these topic words are chosen as potential criteria words. Then, the words are tagged according to whether they can serve as criteria for each middle-level category. Next, every tagged word is vectorized with a pre-trained word embedding model. Finally, a k-nearest neighbor case-based approach is used to classify each word using these tags. After the preparation phase, the criteria extraction phase is conducted on low-level categories. This phase starts by crawling reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Candidate criteria words are extracted by taking the nouns from the data and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate words using the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea. The review data came from the 'Electronics/Digital' section, one of the high-level categories on 11st. For performance evaluation, three other models were compared with the suggested model: the actual criteria used by 11st, a model that extracts nouns with the morpheme analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The performance evaluation was set up to predict the evaluation criteria of 10 low-level categories with the suggested model and the three models above. The criteria words extracted by each model were combined into a single word set, which was used in survey questionnaires. In the survey, respondents chose every item they considered an appropriate criterion for each category, and each model received a score whenever a chosen word had been extracted by that model. The suggested model had higher scores than the other models in 8 out of 10 low-level categories. Paired t-tests on the scores of each model confirmed that the suggested model shows better performance in 26 out of 30 tests. In addition, the suggested model was the best model in terms of accuracy. This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement using the k-nearest neighbor approach. This method overcomes the limits of previous dictionary-based and frequency-based refinement models. This study can contribute to improving review analysis for deriving business insights in the e-commerce market.
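The two-phase pipeline can be sketched compactly with scikit-learn's LDA and k-NN, as below. The embedding lookup function `embed`, the tag encoding (1 = criterion word), the topic count, and the value of k are illustrative assumptions rather than the study's actual settings, and the Korean morpheme analysis step is assumed to have already produced noun-only documents.

```python
# Sketch of LDA topic-word extraction followed by k-NN refinement.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

def topic_candidate_words(noun_docs, n_topics=10, top_n=15):
    """Fit LDA on noun-only documents and collect the top words per topic."""
    vec = CountVectorizer()
    X = vec.fit_transform(noun_docs)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
    vocab = np.array(vec.get_feature_names_out())
    words = set()
    for topic in lda.components_:
        words.update(vocab[topic.argsort()[::-1][:top_n]])
    return sorted(words)

def refine_with_knn(candidates, embed, tagged_words, tags, k=5):
    """Keep candidates that k-NN (over word vectors) labels as criteria words."""
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit([embed(w) for w in tagged_words], tags)   # tags: 1 = criterion, 0 = not
    return [w for w in candidates if knn.predict([embed(w)])[0] == 1]
```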

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon; Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content using the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly accessible, easy to collect, and can affect a business. In marketing, real-world information from customers is gathered from websites rather than surveys. Whether posts on a website are positive or negative is reflected in customer responses and sales, so companies try to identify this information. However, reviews on a website are not always favorable and can be difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, whereas recent studies have used data on stock market trends, blogs, news articles, weather forecasts, IMDB, and Facebook. However, accuracy remains limited because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of polarity analysis using the IMDB review data set. First, popular machine learning algorithms for text classification, such as naive Bayes (NB), support vector machines (SVM), XGBoost, random forests (RF), and gradient boosting, were adopted as comparative models. Second, deep learning can extract complex, discriminative features from data; representative algorithms are convolutional neural networks (CNN), recurrent neural networks (RNN), and long short-term memory (LSTM). A CNN can be used similarly to a bag-of-words model when processing a sentence in vector format, but it does not consider the sequential nature of the data. An RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem, which the LSTM was designed to solve. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination, and we tried to understand how well these models work for sentiment analysis and why. This study proposes integrated CNN and LSTM algorithms to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. A CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas an LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time; these gates have the advantage of placing memory blocks on hidden nodes. The LSTM's memory block may not store all the data, but it can handle the long-term dependencies that the CNN cannot. Furthermore, when an LSTM is used in place of the CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN but faster than the LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved as the kernel is trained step by step. CNN-LSTM can compensate for the weaknesses of each model and has the advantage of layer-by-layer learning through the LSTM's end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
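A minimal Keras sketch of a CNN-followed-by-LSTM classifier for binary review sentiment, in the spirit of the integrated model described above. The vocabulary size, filter and unit counts, and the small pooling layer retained before the LSTM are assumptions, not the configuration that produced the reported 90.33% accuracy.

```python
# Sketch of a CNN-LSTM stack for positive/negative review classification.
from keras.models import Sequential
from keras.layers import Embedding, Dropout, Conv1D, MaxPooling1D, LSTM, Dense

VOCAB_SIZE = 20000   # assumed size of the kept vocabulary

model = Sequential([
    Embedding(VOCAB_SIZE, 128),            # word embedding layer
    Dropout(0.2),
    Conv1D(64, 5, activation="relu"),      # CNN extracts local n-gram features
    MaxPooling1D(4),                       # light pooling before the LSTM
    LSTM(70),                              # LSTM models the feature sequence
    Dense(1, activation="sigmoid"),        # positive vs. negative review
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
```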

Target Word Selection Disambiguation using Untagged Text Data in English-Korean Machine Translation (영한 기계 번역에서 미가공 텍스트 데이터를 이용한 대역어 선택 중의성 해소)

  • Kim Yu-Seop; Chang Jeong-Ho
    • The KIPS Transactions:PartB / v.11B no.6 / pp.749-758 / 2004
  • In this paper, we propose a new method that utilizes only a raw corpus, without additional human effort, to disambiguate target word selection in English-Korean machine translation. We use two data-driven techniques: Latent Semantic Analysis (LSA) and Probabilistic Latent Semantic Analysis (PLSA). These techniques can represent complex semantic structures in given contexts such as text passages. We construct linguistic semantic knowledge with the two techniques and use this knowledge for target word selection in English-Korean machine translation. For target word selection, we also utilize grammatical relationships stored in a dictionary. We use the k-nearest neighbor learning algorithm to resolve the data sparseness problem in target word selection and estimate the distance between instances based on these models. In the experiments, we use AP news articles from the TREC data to construct the latent semantic space and the Wall Street Journal corpus to evaluate target word selection. With the latent semantic analysis methods, the accuracy of target word selection improved by over 10%, and PLSA showed better accuracy than LSA. Finally, using correlation analysis, we showed the relationship between accuracy and two important factors: the dimensionality of the latent space and the k value in k-NN learning.
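The LSA half of this approach can be sketched as follows with scikit-learn: a latent semantic space is built from raw passages with truncated SVD, and a translation candidate is chosen by comparing the source context with its nearest example passages. The 100-dimensional space, k = 3, and the `passages_by_word` lookup are illustrative assumptions; the PLSA variant and the dictionary-based grammatical relations are omitted.

```python
# Sketch: LSA latent space plus nearest-neighbor scoring of translation candidates.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def build_lsa_space(passages, n_dims=100):
    """Build a TF-IDF matrix over raw passages and reduce it with truncated SVD."""
    vec = TfidfVectorizer()
    X = vec.fit_transform(passages)
    svd = TruncatedSVD(n_components=n_dims, random_state=0).fit(X)
    return vec, svd

def select_target_word(context, candidates, vec, svd, passages_by_word, k=3):
    """Score each Korean candidate by the mean cosine similarity between the
    source context and its k nearest example passages in the latent space."""
    q = svd.transform(vec.transform([context]))
    best, best_score = None, -1.0
    for cand in candidates:
        examples = svd.transform(vec.transform(passages_by_word[cand]))
        sims = cosine_similarity(q, examples)[0]
        score = np.sort(sims)[::-1][:k].mean()
        if score > best_score:
            best, best_score = cand, score
    return best
```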

Playing with Rauschenberg: Re-reading Rebus (라우센버그와 게임하기-<리버스> 다시읽기)

  • Rhee, Ji-Eun
    • The Journal of Art Theory & Practice / no.2 / pp.27-48 / 2004
  • Robert Rauschenberg's artistic career has often been regarded as having reached its culmination when the artist won the first prize at the 1964 Venice Biennale. With this victory, Rauschenberg triumphantly entered the pantheon of all-American artists and firmly secured his position in the history of American art. On the other hand, despite the artist's ongoing new experiments in his art, the seemingly precocious ripeness of his career has led critical discourse on Rauschenberg's art toward the artist's early works, most of which were made in the mid-1950s and the 1960s. The crux of Rauschenberg criticism lies not only in its focus on the artist's works of the 1950s and 1960s, but also in its broad dismissal of the significance of the imagery the artist employed in his works. As the art historians Roger Cranshaw and Adrian Lewis point out, the critical discourse on Rauschenberg either focuses on formalist concerns with the picture plane or relies on a "culturalist" interpretation of Rauschenberg's imagery that emphasizes the artist's "Americanness." Recently, a group of art historians centered around October has applied Charles Sanders Peirce's semiotics as an art historical methodology and illuminated the indexical aspects of Rauschenberg's work. A semantic inquiry into Rauschenberg's imagery has also been launched by art historians who seek clues in the artist's personal context. The first half of this essay examines the previous criticism of Rauschenberg's art, and the second half discusses the artist's 1955 work Rebus, which I think intersects various critical concerns of Rauschenberg's work and yet defies the closure of discourse in any one direction. The categories of signs in the semiotics of Charles Sanders Peirce and the discourse of Jean-Francois Lyotard will be used in discussing the meanings of Rebus, not to search for semantic readings of the work, but to draw an analogy between the paradoxical structures of the work and the theory. The definition of rebus is as follows: Rebus 1. a representation of words or syllables by pictures of objects or by symbols whose names resemble the intended words or syllables in sound; also: a riddle made up wholly or in part of such pictures or symbols. 2. a badge that suggests the name of the person to whom it belongs. Webster's Third New International Dictionary of the English Language Unabridged. Since its creation in 1955, Robert Rauschenberg's Rebus has been one of the most intriguing works in the artist's oeuvre. This monumental 'combine' painting (6 feet × 10 feet 10.5 inches) consists of three panels covered with fabric, paper, newspaper, and printed reproductions. On top of these, oil paint, pencil, and crayon drawings connect each section into a whole. The layout of the images is horizontal overall. Starting from a torn election poster on the far left side of the painting, which partially reads "THAT REPRE," Rebus leads us to proceed from left to right, the typical direction of reading in a Western context. Along with its seemingly apt title, Rebus has prompted many art historians to seek semantic readings of it. These art historians painstakingly reconstruct its iconography based on the artist's interviews, (auto)biography, and the artistic context of his works. Interpretations of Rebus vary from an 'image-by-image' collation with words to more general commentary on Rauschenberg's work overall, such as calling it a work that "bridges art and life." Despite the title's allusion to the legitimate purpose of the painting as a decoding of imagery into sound, Rebus, I argue, actually hinders such a reading. By reading through Peirce to Rauschenberg, I will delve into the subtle anxiety between words and images in their works. On this basis, I suggest that Rauschenberg's strategy in playing Rebus is to hide the meaning of the imagery rather than to disclose it.
