• Title/Summary/Keyword: Large Object

Search Results: 1,181

A Study on the Development of a Home Mess-Cleanup Robot Using an RFID Tag-Floor (RFID 환경을 이용한 홈 메스클린업 로봇 개발에 관한 연구)

  • Kim, Seung-Woo;Kim, Sang-Dae;Kim, Byung-Ho;Kim, Hong-Rae
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.11 no.2
    • /
    • pp.508-516
    • /
    • 2010
  • An autonomous home mess-cleanup robot is newly developed in this paper. Thus far, vacuum cleaners have lightened the burden of household chores, but the operational labor they still require has remained considerable. Recently, cleaning robots were commercialized to address this, yet they too fell short because they could not handle mess-cleanup: picking up large trash and arranging newspapers, clothes, and similar items. Hence, we develop a new home mess-cleanup robot (McBot) to overcome this problem. The robot needs agile navigation and a novel manipulation system for mess-cleanup. The autonomous navigation system must be controlled to fully scan the living room and to track the desired path precisely. It must also recognize its own absolute position and orientation and distinguish messed objects to be cleaned up from obstacles that should merely be avoided. The manipulator, which a vacuum-cleaning robot does not need, distinguishes large trash to be discarded from messed objects to be arranged; it must judge the form of each messed object and carry it properly to its destination. In particular, this paper describes our approach to accurate localization using RFID for home mess-cleanup robots. Finally, the effectiveness of the developed McBot is confirmed through live tests of the mess-cleanup task.
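
The abstract does not give implementation details of the RFID tag-floor localization, so the following is only a minimal, hypothetical Python sketch of the general idea: tag IDs read by a floor-facing antenna are looked up in a known grid map and averaged into a position estimate. All names (`TAG_GRID`, `estimate_position`) and the grid layout are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch: absolute localization from an RFID tag-floor.
# Tags are laid out on a regular grid; each tag ID maps to a known (x, y).
from statistics import mean

TAG_GRID = {                              # tag_id -> (x, y) in metres (assumed map)
    "E200A1": (0.0, 0.0),
    "E200A2": (0.30, 0.0),
    "E200B1": (0.0, 0.30),
    "E200B2": (0.30, 0.30),
}

def estimate_position(read_tag_ids):
    """Estimate the robot's (x, y) as the centroid of all mapped tags currently
    seen by the floor-facing reader; return None if nothing was read."""
    known = [TAG_GRID[t] for t in read_tag_ids if t in TAG_GRID]
    if not known:
        return None
    xs, ys = zip(*known)
    return (mean(xs), mean(ys))

# Example: two neighbouring tags in the antenna footprint.
print(estimate_position(["E200A1", "E200A2"]))   # -> (0.15, 0.0)
```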

Effects of Air Pollution on the Forest Vegetation Structure in the Vicinity of Sasang Industrial Complex in Korea (사상공단(沙上工團)의 대기오염(大氣汚染)이 주변(周邊) 산림(山林)의 식생구조(植生構造)에 미치는 영향(影響))

  • Kim, Jeom Soo;Lee, Kang Young
    • Journal of Korean Society of Forest Science
    • /
    • v.85 no.1
    • /
    • pp.1-14
    • /
    • 1996
  • The object of this study was to examine the effects of air pollution on forest vegetation structure in the vicinity of the Sasang industrial complex in Korea. Forest vegetation structure was investigated at 19 sample plots surrounding the industrial complex and at one site away from it as a control. The results were as follows. 1. In the analysis of vegetation structure, the upper story of the forests consisted mostly of Pinus thunbergii, with some Alnus firma and Robinia pseudoacacia. In the midstory, the major components were Pinus thunbergii, Robinia pseudoacacia, Rhus trichocarpa, Rhus chinensis, and Styrax japonica. In the lower story, Pinus thunbergii was a minor component, while Robinia pseudoacacia, Quercus serrata, Rhus trichocarpa, and Rhododendron yedoense var. poukhanense, which are known to be resistant to air pollution, were found in large numbers. In particular, the importance percentage of Robinia pseudoacacia was high, while that of Rhododendron mucronulatum was low around the industrial complex. 2. For woody plants, the number of species, species diversity, and similarity index around the industrial complex were not significantly different from those in the control plot. 3. For herbs, Oplismenus undulatifolius appeared in large numbers in most plots. The $SDR_3$ values of Miscanthus sinensis, Calamagrostis arundinacea, Paederia scandens, Spodiopogon cotulifer, and Carex humilis were high, while those of Aster scaber, Saussurea seoulensis, Solidago virgaaurea var. asiatica, and Prunella vulgaris var. lilacina were low in the vicinity of the industrial complex. 4. The number of herb species decreased to fewer than 10 around the industrial complex, compared with 20 species in the control plot. In addition, species diversity and the similarity index in the industrial complex were lower than in the control plot. It may be concluded that the Pinus thunbergii forests in the industrial complex consist of tree species resistant to air pollution, and that the composition of the woody vegetation there was not much different from the control plot, while the composition of the herb layer already differed considerably between the two. Forest vegetation structure may therefore change over time due to air pollution in the industrial complex.
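
The abstract reports species diversity and similarity indices without stating the formulas used. As an illustration only, the sketch below computes two commonly used candidates, the Shannon diversity index and the Sørensen similarity coefficient, on hypothetical plot data; these are assumptions, not necessarily the indices used in the paper.

```python
# Illustrative computation of two indices of the kind reported in the abstract.
# Shannon diversity (H') and Sorensen similarity are shown only as common examples.
import math

def shannon_diversity(abundances):
    """H' = -sum(p_i * ln p_i) over species proportions p_i."""
    total = sum(abundances)
    return -sum((n / total) * math.log(n / total) for n in abundances if n > 0)

def sorensen_similarity(species_a, species_b):
    """2c / (a + b): c = shared species, a and b = species counts of each plot."""
    shared = len(set(species_a) & set(species_b))
    return 2 * shared / (len(set(species_a)) + len(set(species_b)))

plot = [120, 35, 10, 5]                       # hypothetical abundances in one plot
print(round(shannon_diversity(plot), 3))
print(sorensen_similarity(["Pinus thunbergii", "Robinia pseudoacacia"],
                          ["Pinus thunbergii", "Quercus serrata"]))
```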

Development of a Classification Method for Forest Vegetation on the Stand Level, Using KOMPSAT-3A Imagery and Land Coverage Map (KOMPSAT-3A 위성영상과 토지피복도를 활용한 산림식생의 임상 분류법 개발)

  • Song, Ji-Yong;Jeong, Jong-Chul;Lee, Peter Sang-Hoon
    • Korean Journal of Environment and Ecology
    • /
    • v.32 no.6
    • /
    • pp.686-697
    • /
    • 2018
  • Owing to advances in remote sensing technology, it has become easier to obtain high-resolution imagery frequently enough to detect subtle changes over extensive areas, particularly in forests, which are not readily sub-classified. Time-series analysis of high-resolution images requires collecting a large amount of ground truth data. In this study, the potential of a land coverage map as ground truth data was tested for classifying high-resolution imagery. The study site was Wonju-si in Gangwon-do, South Korea, which has a mix of urban and natural areas. KOMPSAT-3A imagery taken in March 2015 and a land coverage map published in 2017 were used as source data. Two pixel-based classification algorithms, Support Vector Machine (SVM) and Random Forest (RF), were selected for the analysis. Classification of forest only was compared with classification of the whole study area excluding wetland. Confusion matrices from the classification showed that the overall accuracies for both targets were higher with the RF algorithm than with SVM. While the overall accuracy of the forest-only analysis with the RF algorithm was 18.3% higher than with SVM, for the whole-region analysis the difference was smaller, at 5.5%. For the SVM algorithm, adding a Majority analysis step yielded a marginal improvement of about 1% over the plain SVM analysis. The RF algorithm was more effective at identifying broad-leaved forest within the forest, while for the other classes the SVM algorithm was more effective. As only the two pixel-based classification algorithms were tested here, future classification is expected to improve overall accuracy and reliability by introducing time-series analysis and object-based algorithms. This approach should contribute to improving large-scale land planning by providing an effective land classification method at higher spatial and temporal scales.
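
As a minimal sketch of the pixel-based RF-versus-SVM comparison described above, the scikit-learn snippet below trains both classifiers and reports a confusion matrix and overall accuracy. The band values and class labels are synthetic placeholders, not KOMPSAT-3A pixels or the paper's land-cover classes.

```python
# Minimal sketch of a pixel-based RF vs. SVM comparison with synthetic band values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(0)
X = rng.random((2000, 4))                 # 4 spectral bands per pixel (synthetic)
y = rng.integers(0, 3, size=2000)         # 3 land-cover classes (synthetic labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("SVM", SVC(kernel="rbf"))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "overall accuracy:", round(accuracy_score(y_te, pred), 3))
    print(confusion_matrix(y_te, pred))
```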

A Survey on the Visual Characteristics and Preference of Road Landscape of Traditional Gardens in Suzhou, China based on Rockery Ratio - With a Comparison of Consciousness between Korean and Chinese - (중국 전통원림의 치석피도(置石被度)에 따른 원로경관의 시지각적 특성 분석 - 한국인과 중국인 시지각 비교를 중심으로 -)

  • Kim, Dong-Chan;Park, Yool-Jin;Song, Mei-Jie
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.29 no.4
    • /
    • pp.70-77
    • /
    • 2011
  • This study takes the road landscape of traditional Chinese Kangnam gardens in Suzhou as its object. It compares the relations and differences between Korean and Chinese preferences for road landscapes with different rockery ratios, and examines how Korean and Chinese respondents differ in the adjective-based visual characteristics they assign to traditional-garden road landscapes and how these visual characteristics affect preference. The research proceeded as follows. First, a theoretical survey of the road landscape of traditional Chinese Kangnam gardens was conducted, photographs of garden road landscapes in Suzhou were taken, and 15 pictures were selected based on rockery ratio. Second, to grasp the visual preference and landscape characteristics of the Suzhou garden road landscapes, the 15 pictures and 21 pairs of adjectives were used in a questionnaire survey. Third, to identify differences between Korean and Chinese preferences for the road landscape of traditional Chinese Kangnam gardens, a t-test analysis was conducted. To grasp the impact of rockery ratio on preference, the landscape pictures were classified by rockery coverage and mean analysis and factor analysis were applied to the questionnaire results for the Korean and Chinese respondents, respectively; to identify differences in the drivers of landscape preference, an analysis of those drivers was carried out for each group; and the impacts of the various factors on preference were then examined. The results are as follows. The analysis of differences between Korean and Chinese preferences for the road landscape of traditional Chinese Kangnam gardens shows that the overall preference of the Chinese respondents is higher than that of the Koreans. The landscape preference analysis shows that the mean preference values by rockery-ratio category rank, in descending order: medium ratio, very small ratio, small ratio, large ratio, very large ratio. The analysis of the relation between the rockery ratio of traditional Chinese Kangnam gardens and preference shows that preference increases as the rockery ratio decreases, and that variation in rockery ratio has a greater impact on the Korean respondents. In the analysis of visual characteristics, the factors for the Korean respondents are an "aesthetic factor", a "comfort factor", a "neat (orderly) factor", and a "fun factor", whereas the visual characteristics of the Chinese respondents comprise three factors: a "psychological factor", a "comfort factor", and a "neat factor".
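
For illustration of the group comparison described above, the sketch below runs an independent-samples t-test on preference scores of Korean and Chinese respondents. The scores are synthetic stand-ins generated here, not the survey data from the paper.

```python
# Sketch of the Korean-vs-Chinese preference comparison: an independent-samples
# t-test on synthetic preference ratings (the paper's actual survey data is not used).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
korean = rng.normal(loc=4.2, scale=0.8, size=60)    # synthetic preference ratings
chinese = rng.normal(loc=4.6, scale=0.8, size=60)

t_stat, p_value = stats.ttest_ind(chinese, korean, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")        # a small p suggests a group difference
```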

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, rising demand for big data analysis has been driving vigorous development of related technologies and tools. In addition, advances in IT and the increased penetration of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis keep increasing; big data analysis is therefore expected to become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to those who request it. However, growing interest in big data analysis has spurred computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed increasingly by the requesters themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are being used in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large number of documents, identifies the documents corresponding to each issue, and provides the identified documents as a cluster; it is regarded as very useful because it reflects the semantic elements of the documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the whole collection must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to many documents, and it creates a scalability problem: an exponential increase in processing time as the number of analysis objects grows. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method makes topic modeling feasible for a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without combining the entire collection. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other work on topic modeling. In this paper, we propose a topic modeling approach that solves the two problems above. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling over the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
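
The paper's RGS mapping procedure is only described in outline above, so the sketch below shows the general divide-and-conquer idea under stated assumptions: LDA is fitted on each local document set over a shared vocabulary, and each local topic is mapped to its most similar "global" topic by cosine similarity of topic-word distributions. The corpus, model, and mapping rule are illustrative, not the authors' exact method.

```python
# Sketch of divide-and-conquer topic modeling: local LDA topics mapped to
# global LDA topics by cosine similarity of their topic-word distributions.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stock market price economy", "election president vote policy",
        "smartphone chip battery release", "bond interest rate inflation",
        "parliament law vote debate", "laptop processor memory upgrade"] * 20

vec = CountVectorizer()
X = vec.fit_transform(docs)                       # shared vocabulary for all sets

global_lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)

local_sets = [X[:60], X[60:]]                     # two "local" document sets
for i, X_local in enumerate(local_sets):
    local_lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X_local)
    sim = cosine_similarity(local_lda.components_, global_lda.components_)
    print(f"local set {i}: local topic -> closest global topic", sim.argmax(axis=1))
```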

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the beginning of the 21st century, the growth of the internet and of information and communication technologies has brought a variety of high-quality services. In particular, the E-commerce industry, in which Amazon and eBay stand out, is expanding rapidly. As E-commerce grows and more products are registered at online shopping malls, customers can easily find and compare what they want to buy. However, this growth has also created a problem: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a general keyword, too many products are returned; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing the text in images can be a solution. Because the bulk of product details is provided in catalogs in image format, most product information cannot be retrieved through the current text-based search system. If the information in these images can be converted to text, customers can search by product detail, which makes shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they struggle in certain circumstances, for example when the text is too small or the fonts are inconsistent. This research therefore proposes a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s. The Single Shot Multibox Detector (SSD), a well-regarded model for object detection, can be used with its structure redesigned to account for the differences between text and general objects. However, because deep learning models are trained by supervised learning, the SSD model needs a large amount of labeled training data. One could manually label the location and class of each keyword in catalogs, but manual collection raises many problems: some keywords would be missed because of human error; collecting data at the required scale would be too time-consuming, or too costly if many workers were hired to shorten the time; and finding images that contain specific keywords to be trained would also be difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures and saves the location information of the keywords at the same time. With this program, data can be collected efficiently and the performance of the SSD model improves: the SSD model recorded a recognition rate of 81.99% with 20,000 images created by the program. Moreover, this research tested the efficiency of the SSD model with different data configurations to analyze which features of the data affect the performance of text recognition in images. The results show that the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spacing between keywords, and differences in background images are all related to the performance of the SSD model. This test can guide performance improvements of the SSD model and of other deep-learning-based text recognizers through higher-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in E-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in the catalog.
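
The paper describes its training-data generator only at a high level. The sketch below, assuming the Pillow imaging library, shows the basic idea: composite keyword text onto a catalog-like background and record each keyword's bounding box as a labeled sample. The keywords, fonts, and label format are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch of an automatic training-data generator: paste keyword text
# onto a blank catalog-like page and record its bounding box as a label.
import random
from PIL import Image, ImageDraw, ImageFont

KEYWORDS = ["cotton", "waterproof", "100% wool", "machine washable"]

def make_sample(width=600, height=800):
    img = Image.new("RGB", (width, height), "white")     # blank catalog page
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    labels = []
    for word in random.sample(KEYWORDS, k=2):
        x, y = random.randint(20, 400), random.randint(20, 740)
        draw.text((x, y), word, fill="black", font=font)
        x0, y0, x1, y1 = draw.textbbox((x, y), word, font=font)
        labels.append({"keyword": word, "bbox": (x0, y0, x1, y1)})
    return img, labels                                    # image + detector-style labels

image, boxes = make_sample()
print(boxes)
```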

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive data generation, greatly influencing society; this is an unmatched phenomenon in history, and we now live in the age of big data. SNS data satisfies the conditions of big data in the amount of data (volume), the speed of data input and output (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS big data, this information can be used as an important new source of value creation, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS big data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) it provides the topic keyword set corresponding to a daily ranking; (2) it visualizes the daily time-series graph of a topic over the course of a month; (3) it conveys the importance of a topic through a treemap based on a scoring system and frequency; and (4) it visualizes the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the big data generated by SNS in real time. SNS big data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to process a large amount of real-time data rapidly, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data processing performance. In the age of big data, visualization is attractive to the big data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; interaction with the data is easy, and it is useful for managing real-time data streams with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS graphical user interface (GUI) is designed with these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and on this basis we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
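
As a small illustration of function (1) described above, the sketch below derives a ranked topic keyword set from one day's tweets using a CountVectorizer plus LDA pipeline. The tweets are synthetic, and this stand-alone pipeline is only an illustrative stand-in for the system's Hadoop/MongoDB-backed processing.

```python
# Sketch of a daily topic keyword ranking from (synthetic) tweets of one day.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

daily_tweets = ["subway delay station crowd", "election debate candidate",
                "subway accident line two", "candidate poll election result"] * 25

vec = CountVectorizer()
X = vec.fit_transform(daily_tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[::-1][:3]]
    print(f"topic {k} keywords:", top)                   # input to the daily ranking
```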

A Study on Detection Methodology for Influential Areas in Social Network using Spatial Statistical Analysis Methods (공간통계분석기법을 이용한 소셜 네트워크 유력지역 탐색기법 연구)

  • Lee, Young Min;Park, Woo Jin;Yu, Ki Yun
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.4
    • /
    • pp.21-30
    • /
    • 2014
  • Lately, new influentials have gained large numbers of followers on social networks owing to the vitalization of various social media. There has been considerable research on influential people in social networks, but it has made limited use of the location information available from Location-Based Social Network Services (LBSNS). The purpose of this study is therefore to propose a spatial detection methodology, using spatial statistical analysis methods, for influentials who comment on diverse social and cultural issues in LBSNS, together with a plan for applying it. Twitter was used to collect the analysis data: 168,040 Twitter messages were collected in Seoul over a one-month period. In addition, 'politics,' 'economy,' and 'IT' were set as categories, with hot-issue keywords assigned to each category. An exposure index for detecting influentials with respect to the hot-issue keywords was then computed, and the exposure index for each administrative unit of Seoul was calculated through a spatial join operation. Moreover, an influence index that accounts for the spatial dependence of the exposure index was derived to extract the areas in the top 5% of the influence index and to analyze their spatial distribution characteristics and spatial correlation. The experimental results show that the spatial correlation coefficient within the same category was relatively high, at more than 0.3, and the correlation coefficient between the politics and economy categories was also above 0.3. On the other hand, the correlation between the politics and IT categories was very low, at 0.18, and between the economy and IT categories it was also very weak, at 0.15. This study is significant in that it characterizes influentials from a spatial information perspective, and it can be usefully applied in the field of gCRM in the future.
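
To illustrate the spatial-join step described above, the sketch below counts keyword-matching tweet points falling inside each administrative polygon to produce a per-district exposure count. It assumes the geopandas and shapely libraries (with the `predicate` argument of `sjoin`), and the districts and tweet points are synthetic stand-ins, not the paper's Seoul data or its exact index definition.

```python
# Sketch of a spatial join: count keyword-matching tweets per administrative
# unit as the basis of an exposure index.
import geopandas as gpd
from shapely.geometry import Point, box

districts = gpd.GeoDataFrame(
    {"district": ["A", "B"]},
    geometry=[box(0, 0, 1, 1), box(1, 0, 2, 1)], crs="EPSG:4326")

tweets = gpd.GeoDataFrame(
    {"matches_keyword": [True, True, False, True]},
    geometry=[Point(0.2, 0.5), Point(0.8, 0.1), Point(1.5, 0.5), Point(1.2, 0.9)],
    crs="EPSG:4326")

joined = gpd.sjoin(tweets[tweets.matches_keyword], districts, predicate="within")
exposure = joined.groupby("district").size().rename("exposure_count")
print(exposure)          # matching-tweet counts per district
```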

Development of ATSC3.0 based UHDTV Broadcasting System providing Ultra-high-quality Service that supports HDR/WCG Video and 3D Audio, and a Fixed UHD/Mobile HD Service (HDR/WCG 비디오와 3D 오디오를 지원하는 초고품질 방송서비스와 고정 UHD/이동 HD 방송 서비스를 제공하는 ATSC 3.0 기반 UHDTV 방송 시스템 개발)

  • Ki, Myungseok;Seok, Jinwuk;Beack, Seungkwon;Jang, Daeyoung;Lee, Taejin;Kim, Hui Yong;Oh, Hyeju;Lim, Bo-mi;Bae, Byungjun;Kim, Heung Mook;Choi, Jin Soo
    • Journal of Broadcast Engineering
    • /
    • v.22 no.6
    • /
    • pp.829-849
    • /
    • 2017
  • With large-screen TV displays, the convergence of broadcasting and broadband, and advances in signal compression and transmission technology, terrestrial digital broadcasting has evolved into UHD broadcasting capable of providing fixed UHD and mobile HD services simultaneously. The Korean standard for terrestrial UHDTV broadcasting is based on ATSC 3.0, the North American broadcasting standard. As its new AV codecs, the terrestrial UHDTV broadcasting standard chose the HEVC video codec, which compresses with higher efficiency than AVC, and the MPEG-H 3D Audio codec for realistic audio. DASH and MMT were adopted as transmission formats in place of MPEG-2 TS to support broadband delivery as well as the broadcasting network, and ROUTE multiplexing technology is applied to provide 4K UHD and mobile HD services simultaneously. In this paper, we propose the audio/video encoder required to provide a high-quality video service supporting HDR/WCG, a 10.2-channel/4-object 3D audio service, and simultaneous fixed UHD and mobile HD broadcasting based on ATSC 3.0. We also implemented an ATSC 3.0 LDM system comprising a ROUTE/DASH packager, a multiplexing system, and physical-layer transmission/reception, and verified the service capability by applying it in a real-time broadcast environment.
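
Purely for illustration of the two simultaneous services discussed above, the sketch below records the fixed-UHD and mobile-HD service parameters in a small data structure. This is not an ATSC 3.0 API; the field names and values are assumptions drawn from the abstract.

```python
# Illustrative description (not an ATSC 3.0 API) of the two simultaneous services.
from dataclasses import dataclass

@dataclass
class ServiceConfig:
    name: str
    resolution: str
    video_codec: str
    audio_codec: str
    transport: str

services = [
    ServiceConfig("fixed UHD", "3840x2160 (HDR/WCG)", "HEVC", "MPEG-H 3D Audio", "ROUTE/DASH"),
    ServiceConfig("mobile HD", "1920x1080", "HEVC", "MPEG-H 3D Audio", "ROUTE/DASH"),
]
for service in services:
    print(service)
```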

Digital painting: Image transformation, simulation, heterologie and transfiguration (현대회화에서의 형태와 물질 -Digital Transfiguration에 관한 연구-)

  • Jeong, Suk-Yeong
    • Journal of Science of Art and Design
    • /
    • v.10
    • /
    • pp.161-181
    • /
    • 2006
  • The key terms that appear in my theoretical study and work are image transformation in digital painting, simulation, heterologie, and transfiguration. First, consider the 'digital era', or 'new media era'. Amid the rapid social and cultural change called the digital era, the image world, including painting, is undergoing dramatic change. Together with the development of science and technology, many things once deemed impossible are becoming real in the image world, and these changes greatly influence our lives. The word that compresses and expresses this change of the image world is 'digital'. 'Digit', from the Latin word for finger, denotes a discretely changing signal; more narrowly, it denotes the sequence of '0' and '1' signals in a computer. Its opposite is 'analogue'. Since 'analogue' carries the sense of 'inference' or 'similarity', it denotes a signal or form that changes continuously over time, in contrast to the digital. In place of the analogue, the digital has emerged as a dominant force across our entire culture. Throughout culture, art, and the sciences, the digital carries the sense of modernity and importance: the prefix 'digital', as in digital media, digital culture, digital design, or digital philosophy, is treated as a synonym for the modern and the new. The advent of the digital has brought innovative change to the image world, created aesthetic experiences that were not possible before, and foreshadows the formation of advanced art and the expansion of the creative domain. Various intellectual activities using the computer are building this infrastructure across the world. In painting, the computer immediately realizes the painter's idea, takes part in simulation and in contingencies such as abrupt reversal, extraction, twisting, shaking, blurring, and overlapping, stimulates the painter's creativity, and provides a digital formative language that enables new visual experiences for the audience. In this changed digital era, the images in my work appear as drawing-like 'transfiguration'. The word 'transfiguration' does not denote a completed, fixed substance but an endlessly moving, floating shape. The concept is thus opposed to substantialist thinking, and various related concepts can stand in for it in similar cases: change, deterioration, mutation, deformity of appearance, and 'morphing', the term frequently used technically in computing. These concepts are not clearly demarcated and are variably and intricately related. Transfiguration fundamentally means the denial of, or deviation from, 'objectivity' and '(continual) stasis'. This phenomenon has appeared across the schools of art ever since realism was rejected in the 19th century. In expressionism, futurism, cubism, and the like at the beginning of the 20th century it is called 'deformation': the referent is largely preserved within a process of structural deviation, keeping a realistic limit that must be maintained. By contrast, the dramatic transfiguration that has appeared in the modern era through surrealism differs in that it tends toward the deterioration and deviation of the depicted object rather than its preservation. 
From this point on, the transfiguration that emerges from computer morphing deteriorates and hides reality and, furthermore, replaces 'reality'. Transfiguration thus approaches the world of the fake, the 'imaginary' simulation world of Baudrillard. According to Baudrillard, the image hides and deteriorates reality and, moreover, presents the 'non-existent' as the 'imaginary' under the name of transfiguration. A certain reality, that is, an image absent from reality, is created and proliferates until it finally replaces reality; this is what Baudrillard calls simulation. Georges Bataille, in turn, discusses the image produced by digital technology in terms of heterologie. The heterological image is a visual signal established through the media. The media image is continuously produced, extinguished, and transformed, and the clear boundary between images becomes meaningless. The composition, excess, and violation of the digital image are explained by the heterology (heterologie) proposed as a key notion by Bataille, the heretic philosopher. Since mutating forms and images take shape through mechanical production, heterologie is introduced in this theory as a very low materialism (bas materialisme). Heterologie as a gradually changing low materialism has developed, with the changing times of the late 20th century, into a different concept and analysis beyond the meanings of high and low. All images, including my own, de-standardize and transform the code, but the reproduction and de-standardization of this code is not simple. The problems of transformation raised by the transfiguration that appears in my digital drawing-painting, by simulation, and by heterologie are ongoing ones. Moreover, subjects such as human existence, distance from real life, and political and social problems are being extended into actual research and various expressive works. In particular, an individual image world is established through the digital painting transfiguration technique, and its changes and re-examination begin to acquire durability; the consciousness of the observers who view the image is changing the subject. Together with theoretical research, researchers should take the first step toward approaching the varied image changes of digital-era painting through the transfiguration technique, using our real and historical images.
