• Title/Summary/Keyword: Artificial Intelligence Art


Application and Analysis of Remote Sensing Data for Disaster Management in Korea - Focused on Managing Drought of Reservoir Based on Remote Sensing - (국가 재난 관리를 위한 원격탐사 자료 분석 및 활용 - 원격탐사기반 저수지 가뭄 관리를 중심으로 -)

  • Kim, Seongsam; Lee, Junwoo; Koo, Seul; Kim, Yongmin
    • Korean Journal of Remote Sensing / v.38 no.6_3 / pp.1749-1760 / 2022
  • In modern society, human and social damage caused by natural disasters and frequent disaster accidents has increased year by year. Prompt access to dangerous or inaccessible disaster sites using state-of-the-art Earth observation equipment such as satellites, drones, and survey robots, together with timely collection and analysis of meaningful disaster information, can play an important role in protecting people's property and lives throughout the entire disaster management cycle, from responding at disaster sites to establishing mid- to long-term recovery plans. This special issue introduces the National Disaster Management Research Institute (NDMI)'s disaster management technology that utilizes various Earth observation platforms, such as mobile survey vehicles equipped with close-range disaster site survey sensors, drones, and survey robots, as well as satellite technology, a tool of remote Earth observation. Major research achievements include detection of damage from water disasters using Google Earth Engine, mid- and long-term time-series observation, detection of reservoir water bodies using Sentinel-1 Synthetic Aperture Radar (SAR) images and artificial intelligence, analysis of resident movement patterns during forest fire disasters, and research on efficient integrated management and utilization of disaster safety research data. In addition, scientific investigation of the causes of disasters using drones and survey robots at inaccessible and dangerous disaster sites is described.
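The reservoir water-body detection mentioned in this abstract pairs Sentinel-1 SAR imagery with automated classification. As a hedged illustration of that kind of workflow, the sketch below thresholds VV backscatter in the Google Earth Engine Python API; the region, date range, and -16 dB threshold are invented placeholders, not NDMI's actual parameters.

```python
# Minimal sketch: mapping reservoir water extent from Sentinel-1 SAR
# with the Google Earth Engine Python API. Region, dates, and the
# backscatter threshold are illustrative assumptions only.
import ee

ee.Initialize()

# Hypothetical reservoir region of interest (lon/lat bounding box).
roi = ee.Geometry.Rectangle([127.0, 36.5, 127.2, 36.7])

# Sentinel-1 GRD, VV polarization, Interferometric Wide swath mode.
s1 = (ee.ImageCollection('COPERNICUS/S1_GRD')
      .filterBounds(roi)
      .filterDate('2022-06-01', '2022-09-01')
      .filter(ee.Filter.eq('instrumentMode', 'IW'))
      .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
      .select('VV'))

# A median composite suppresses speckle; open water scatters little
# energy back, so low VV backscatter (< -16 dB, an assumed threshold)
# marks water pixels.
water = s1.median().lt(-16).selfMask().clip(roi)

# Sum pixel areas over the mask to estimate water surface area.
area_m2 = (water.multiply(ee.Image.pixelArea())
           .reduceRegion(reducer=ee.Reducer.sum(),
                         geometry=roi, scale=10, maxPixels=1e9)
           .get('VV'))
print('Estimated water area (km^2):',
      ee.Number(area_m2).divide(1e6).getInfo())
```

Tracking this area statistic over monthly composites would give the kind of mid- and long-term reservoir time series the abstract refers to.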

A review on urban inundation modeling research in South Korea: 2001-2022 (도시침수 모의 기술 국내 연구동향 리뷰: 2001-2022)

  • Lee, Seungsoo; Kim, Bomi; Choi, Hyeonjin; Noh, Seong Jin
    • Journal of Korea Water Resources Association / v.55 no.10 / pp.707-721 / 2022
  • This study presents a state-of-the-art review of urban inundation simulation technology, summarizing major achievements and limitations along with future research recommendations and challenges. More than 160 papers published in major domestic academic journals since the 2000s were analyzed. After analyzing the core themes and contents of the papers, the status of technological development was reviewed by simulation methodology, distinguishing physically-based and data-driven approaches. In addition, research trends by application purpose and advances overseas and in related fields were analyzed. Since more than 60% of urban inundation studies used the Storm Water Management Model (SWMM), development of new modeling techniques for the detailed physical processes of dual drainage is encouraged. Data-driven approaches have become a new status quo in urban inundation modeling; however, given that hydrological extreme data are rare, balanced development of data-driven and physically-based approaches is recommended. Urban inundation analysis technology, actively combined with new technologies from other fields such as artificial intelligence, IoT, and the metaverse, will require continuous support from society and holistic approaches to address climate risk and reduce disaster damage.
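Since SWMM underlies the majority of the reviewed studies, a minimal sketch of driving a SWMM simulation programmatically may help make the modeling workflow concrete. It uses the open-source pyswmm wrapper around the SWMM engine; the input file 'urban_catchment.inp' and node ID 'J1' are hypothetical placeholders, not artifacts from any reviewed paper.

```python
# Minimal sketch: stepping through a SWMM rainfall-runoff simulation
# with pyswmm (pip install pyswmm). The .inp model file and the node
# ID are hypothetical placeholders.
from pyswmm import Simulation, Nodes

with Simulation('urban_catchment.inp') as sim:
    node = Nodes(sim)['J1']        # a junction where surcharge is checked
    for _ in sim:                  # advance the model one routing step
        if node.flooding > 0.0:    # flow lost from the node when surcharged
            print(sim.current_time, 'flooding at J1:', node.flooding)
```

Coupling a loop like this with a 2-D surface-flow model is the essence of the dual-drainage modeling the review calls for.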

AI Art Creation Case Study for AI Film & Video Content (AI 영화영상콘텐츠를 위한 AI 예술창작 사례연구)

  • Jeon, Byoungwon
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.85-95 / 2021
  • Currently, we stand between computers as creative tools and computers as creators. A new genre of movies, which can be called a post-cinema situation, is emerging. This paper aims to diagnose the possibility of the emergence of AI cinema. To confirm this possibility, case studies examined whether the creation of story, narrative, image, and sound, the necessary conditions of film creation, is possible by artificial intelligence. First, we examined visual creation by the AI painting projects and algorithms Obvious, GAN, and CAN. Second, AI music has already entered the distribution stage in the market in cooperation with humans. Third, AI can already complete drama scripts, and automatic scenario creation programs using big data are also gaining popularity. Thus, we confirmed that the requirements of filmmaking can be met with AI algorithms. From the perspective of Manovich's 'AI Genre Convention', web documentaries and desktop documentaries, typical post-cinema trends, can be said to be representative genres that can be expected to become AI cinema. The conditions under which AI cinema, web documentaries, and desktop documentaries exist are the same. This article suggests a new path for the media of the 4th Industrial Revolution era through research on AI as a creator of post-cinema.

Relative Importance Analysis of Management Level Diagnosis for Consignee's Personal Information Protection (수탁사 개인정보 관리 수준 점검 항목의 상대적 중요도 분석)

  • Im, DongSung; Lee, Sang-Joon
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.2 / pp.1-11 / 2018
  • Recently, new ICT technologies such as IoT, cloud computing, and artificial intelligence have been changing the information society explosively, but personal information leakage incidents at consignee companies are increasing because of the expansion of consignment business and the latest threats such as ransomware and APT. Therefore, in order to strengthen the security of consignee companies, this study derived checklists through analysis of the characteristics of consignment, security standard management systems, and precedent research, and also analyzed laws related to consignment. Finally, we determined the relative importance of the checklist items by applying them to the proposed AHP (Analytic Hierarchy Process) model. Relative importance was ranked as establishment of an internal administration plan, privacy cryptography, life cycle, access authority management, and so on. The purpose of this study is to reduce the risk of leakage of customer information and to improve the consignee's level of personal information protection management by deriving the check items required in handling personal information and validating the model. If inspection activities are performed considering the relative importance of the checklist items, the effectiveness of the time and cost invested will be enhanced.
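For readers unfamiliar with AHP, the priority vector behind such a ranking is typically the principal eigenvector of a pairwise comparison matrix, validated by a consistency ratio. The sketch below illustrates that computation on a hypothetical 4-item matrix; the comparison values are invented for illustration and are not the paper's survey data.

```python
# Minimal sketch of AHP weighting: principal-eigenvector priorities
# plus a consistency ratio. The 4x4 comparison matrix is hypothetical.
import numpy as np

# Pairwise comparisons (Saaty 1-9 scale) among four checklist items,
# e.g. internal administration plan, cryptography, life cycle,
# access authority management. A[i, j] = importance of i over j.
A = np.array([[1,   3,   5,   4],
              [1/3, 1,   2,   2],
              [1/5, 1/2, 1,   1],
              [1/4, 1/2, 1,   1]], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # index of the principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()               # normalized priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)        # consistency index
ri = 0.90                              # Saaty's random index for n = 4
cr = ci / ri                           # consistency ratio; < 0.1 is acceptable

print('weights:', np.round(weights, 3), 'CR:', round(cr, 3))
```

In a study like this one, each expert's comparison matrix is checked for CR < 0.1 before the individual priority vectors are aggregated into the final ranking.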

A Study on the University Education Plan Using ChatGPT for University Students (ChatGPT를 활용한 대학 교육 방안 연구)

  • Hyun-ju Kim; Jinyoung Lee
    • The Journal of the Convergence on Culture Technology / v.10 no.1 / pp.71-79 / 2024
  • ChatGPT, an interactive artificial intelligence (AI) chatbot developed by OpenAI in the U.S., is gaining popularity with great repercussions around the world. Some in academia are concerned that students may use ChatGPT for plagiarism, but ChatGPT is also widely used in positive directions, such as writing marketing phrases or website copy. There is also an opinion that ChatGPT could be a new future for 'search', and some analysts say that the focus should be on fostering rather than excessive regulation. This study analyzed college students' awareness of ChatGPT through a survey of their perceptions of it, and prepared a plagiarism inspection system in order to establish an education support model using ChatGPT. Based on this, a university education support model using ChatGPT was constructed. The education model was built on text, digital, and art foundations, with the detailed strategies necessary for the era of the 4th Industrial Revolution composed beneath them. In addition, the model guides students to use ChatGPT within a permitted range by means of the ChatGPT detection function provided by the plagiarism inspection system, after the instructor of the class determines the allowable range of ChatGPT-generated content according to the learning goal. By linking ChatGPT and the plagiarism inspection system in this way, it is expected that situations in which ChatGPT's excellent abilities are abused in education can be prevented.

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son; Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze the speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, previous studies have attained impressive results by utilizing acted speech from skilled actors recorded in controlled environments for various scenarios. In particular, there is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expressions than spontaneous speech. For this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to conduct emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition using the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% and 73.0% for adults and young people, respectively, using a time-frequency 2-dimensional spectrogram. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
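The pipeline the abstract describes (1-D waveform to 2-D spectrogram to VGG classifier) can be sketched roughly as follows; the file path, input resizing, and 7-class head are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of the waveform -> spectrogram -> VGG pipeline for SER.
# File path, resizing, and the 7-class head are assumptions.
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16

# 1) Convert a 1-D audio signal into a 2-D log-mel spectrogram "image".
y, sr = librosa.load('utterance.wav', sr=16000)   # hypothetical file
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
logmel = librosa.power_to_db(mel, ref=np.max)

# 2) Tile the single channel to 3 channels, resize to the VGG input
#    size, and classify with a VGG-16 whose final layer is replaced
#    by a 7-emotion head (joy, love, anger, fear, sadness, surprise,
#    neutral).
x = torch.tensor(logmel).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0).float()
x = nn.functional.interpolate(x, size=(224, 224))

model = vgg16(weights=None)                # or pretrained weights
model.classifier[6] = nn.Linear(4096, 7)   # 7 emotion classes
model.eval()
with torch.no_grad():
    logits = model(x)
print('predicted emotion id:', logits.argmax(dim=1).item())
```

In practice the head would be trained on labeled spectrograms from the AI-Hub corpus; this sketch only shows the data flow.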

Autopoietic Machinery and the Emergence of Third-Order Cybernetics (자기생산 기계 시스템과 3차 사이버네틱스의 등장)

  • Lee, Sungbum
    • Cross-Cultural Studies / v.52 / pp.277-312 / 2018
  • First-order cybernetics during the 1940s and 1950s aimed for control of an observed system, while second-order cybernetics during the mid-1970s aspired to address the mechanism of an observing system. The former pursues an objective, subjectless approach to a system, whereas the latter prefers a subjective, personal approach to a system. Second-order observation must be noted, since a human observer is a living system that has its own unique cognition. Maturana and Varela place the autopoiesis of this biological system at the core of second-order cybernetics. They contend that an autopoietic system maintains, transforms, and produces itself. Technoscientific recreation of biological autopoiesis opens up a new step in cybernetics: what I describe as third-order cybernetics. The formation of technoscientific autopoiesis overlaps with the Fourth Industrial Revolution, or what Erik Brynjolfsson and Andrew McAfee call the Second Machine Age. It leads to a radical shift from human centrism to posthumanity, whereby humanity is mechanized and machinery is biologized. In two versions of the novel Demon Seed, American novelist Dean Koontz explores the significance of technoscientific autopoiesis. The 1973 version dramatizes two kinds of observers: the technophobic human observer and the technology-friendly machine observer Proteus. As the story concludes, the former dominates the latter, with the result that an anthropocentric position still works. The 1997 version, however, reveals the victory of the techno-friendly narrator Proteus over the anthropocentric narrator. Losing his narrational position, the technophobic human narrator of the story disappears. In the 1997 version, Proteus becomes the subject of desire in luring the divorcee Susan. He longs to flaunt his male egomania. His achievement of male identity is a sign of the technological autopoiesis characteristic of third-order cybernetics. To display the self-producing capabilities integral to the autonomy of machinery, Koontz's novel demonstrates that Proteus manipulates Susan's egg to produce a human-machine mixture. Koontz's demon child, problematically enough, implicates the future of eugenics in an era of technological autopoiesis. Proteus creates a crossbreed of humanity and machinery to engineer a perfect body and mind. He fixes incurable or intractable diseases through genetic modifications. Proteus transfers a vast amount of digital information to his offspring's brain, which enables the demon child to achieve state-of-the-art intelligence. His technological editing of human genes and consciousness leads to digital standardization through the unanimous spread of the best qualities of humanity. He gathers distinguished human genes and mental attributes much like collecting luxury brands. Accordingly, Proteus's child-making project ultimately moves towards technologically controlled eugenics. Pointedly, it disturbs the classical ideal of liberal humanism celebrating a human being as the master of his or her nature.

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown compelling applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well-adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields, so that all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e., vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients aren't just propagated backward through layers but also backward through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
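The three convolutional ideas named above (local receptive fields, shared weights, pooling) and the LSTM remedy for unstable gradients can be made concrete with a short sketch; the layer sizes below are arbitrary illustrations, not a specific published architecture.

```python
# Minimal sketches of the two supervised architectures described above.
# Layer sizes are arbitrary illustrations.
import torch
import torch.nn as nn

# Convolutional net: each 5x5 kernel is a local receptive field whose
# weights are shared across the whole image, so one kernel detects the
# same feature at every location; max-pooling then condenses the
# resulting feature maps.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 4 * 4, 10),   # e.g. 10 image classes
)
print(cnn(torch.randn(1, 1, 28, 28)).shape)    # -> torch.Size([1, 10])

# Recurrent net: an LSTM carries an internal state across time steps,
# and its gating mitigates the vanishing/exploding-gradient problem
# that made early RNNs hard to train through many time steps.
lstm = nn.LSTM(input_size=8, hidden_size=32, batch_first=True)
out, (h, c) = lstm(torch.randn(1, 50, 8))      # a sequence of 50 steps
print(out.shape)                                # -> torch.Size([1, 50, 32])
```

Both models are trained exactly as the abstract describes: backpropagation computes the gradient of an error function with respect to every weight, and an optimizer such as gradient descent applies the update.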

New Insights on Mobile Location-based Services(LBS): Leading Factors to the Use of Services and Privacy Paradox (모바일 위치기반서비스(LBS) 관련한 새로운 견해: 서비스사용으로 이끄는 요인들과 사생활염려의 모순)

  • Cheon, Eunyoung; Park, Yong-Tae
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.33-56 / 2017
  • As Internet usage becomes more common worldwide and smartphones become a necessity of daily life, technologies and applications related to the mobile Internet are developing rapidly. The Internet usage patterns of consumers around the world imply many potential new business opportunities for mobile Internet technologies and applications. A location-based service (LBS) is a service based on the location information of a mobile device. LBS has recently received much attention among mobile applications, and various LBSs are rapidly developing in numerous categories. However, even with the development of LBS-related technologies and services, there is still a lack of empirical research on the intention to use LBS. The applicability of previous research is limited because it focused on the effect of one particular factor and did not show a direct relationship with the intention to use LBS. Therefore, this study presents a research model of factors that affect the intention to use and the actual use of LBS, whose market is expected to grow rapidly, and tested it with a questionnaire survey of 330 users. The results of the data analysis showed that service customization, service quality, and personal innovativeness have a positive effect on the intention to use LBS, and that the intention to use LBS has a positive effect on actual use. These results imply that LBS providers can enhance users' intention to use LBS by offering customization through various LBSs based on users' needs, by improving information service quality in terms of accuracy, timeliness, sensitivity, and reliability, and by encouraging personal innovativeness. However, privacy concerns in the context of LBS are not significantly affected by service customization or personal innovativeness, and privacy concerns do not significantly affect the intention to use LBS. In fact, the location information collected by LBS is less sensitive than the information used to perform financial transactions, which explains this outcome regarding privacy concerns. In addition, the advantages of using LBS matter more, relative to the sensitivity of privacy protection, to LBS users than to users of information systems that involve financial transactions, such as electronic commerce. Therefore, LBS should be treated differently from other information systems. This study makes a theoretical contribution in that it proposed factors affecting the intention to use LBS from a multi-faceted perspective, empirically validated the proposed research model, brought new insights on LBS, and broadened understanding of the intention to use and actual use of LBS. The empirical finding that customization affects users' intention to use LBS also suggests that providing customized services based on usage data analysis, for example by utilizing technologies such as artificial intelligence, can enhance that intention. From a practical point of view, the results of this study are expected to help LBS providers develop competitive strategies for responding to LBS users effectively and to help the LBS market grow. We expect that there will be differences in LBS use depending on factors such as the type of LBS, whether it is free of charge, privacy policies related to LBS, the reliability of related applications and technologies, the frequency of use, etc. Therefore, comparative studies with those factors would contribute to the development of LBS research. We hope this study can inspire many researchers and initiate great research in the LBS field.

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform excellently when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability. In addition, much research has been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the viewer, and the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. On the contrary, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it in light of their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate captions specialized for each domain by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simple application of transfer learning with expertise data may invoke another problem: simultaneous learning with captions of various characteristics may cause a so-called 'inter-observation interference' problem, which makes it difficult to purely learn each characteristic point of view. When learning from vast amounts of data, most of this interference is self-purified and has little impact on learning results. On the contrary, in the case of fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we therefore propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each character. In order to confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 pairs of images and expertise captions were created and used for the expertise transplantation experiments. As a result, it was confirmed that captions generated according to the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated through learning on general data contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on solving the shortage of expertise data and improving the performance of image captioning.
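As a rough illustration of the pre-train/fine-tune pattern underlying expertise transplantation, the sketch below freezes a pretrained image encoder and fine-tunes only a small caption decoder on a small expertise-caption set. This is a generic transfer-learning sketch, not the paper's Character-Independent Transfer-learning; the encoder choice, decoder shape, and vocabulary size are all assumptions.

```python
# Generic pre-train / fine-tune sketch for caption specialization:
# freeze a pretrained image encoder, fine-tune only the decoder on a
# small expertise-caption set. Model choices and sizes are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

encoder = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
encoder.fc = nn.Identity()          # expose 2048-d image features
for p in encoder.parameters():
    p.requires_grad = False         # keep general visual knowledge frozen

vocab_size = 10000                  # assumed caption vocabulary size
decoder = nn.LSTM(input_size=2048, hidden_size=512, batch_first=True)
head = nn.Linear(512, vocab_size)   # predicts the next caption token

# Only the decoder and head receive gradients during fine-tuning on the
# small expert-caption set (about 300 image/caption pairs in the
# paper's experiment).
params = list(decoder.parameters()) + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)
```

The paper's contribution goes a step further by running such fine-tuning independently per caption character to avoid the inter-observation interference described above.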