• Title/Summary/Keyword: artificial intelligence music (인공지능 음악)

Search results: 35

Development of the Artwork using Music Visualization based on Sentiment Analysis of Lyrics (가사 텍스트의 감성분석에 기반 한 음악 시각화 콘텐츠 개발)

  • Kim, Hye-Ran
    • The Journal of the Korea Contents Association / v.20 no.10 / pp.89-99 / 2020
  • In this study, we produced moving-image works through sentiment analysis of music. First, the Google Natural Language API was used for sentiment analysis of the lyrics, and the results were applied to image-visualization rules. In prior engineering research, text-based sentiment analysis has been used to understand users' emotions and attitudes by analyzing their comments and reviews on social media. In this study, that kind of data was instead used as material for creating artworks, so that it could serve aesthetic expression. From the machine's point of view, emotions are reduced to numbers, which limits normalization and standardization. We therefore tried to overcome these limitations by linking the results of sentiment analysis of the lyric data with the rules of formative elements in the visual arts. This study aims to transform traditional art forms such as literature, music, painting, and dance into a new form of art based on the machine's viewpoint, reflecting the current era in which artificial intelligence even attempts to create artworks, those advanced mental products of human beings. In addition, it is expected to expand into an educational platform that facilitates creative activities, psychological analysis, and communication for people with developmental disabilities who have difficulty expressing emotions.
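The abstract describes mapping per-lyric sentiment results onto visual formative elements. A minimal stdlib-only sketch of that idea follows, assuming only the shape of the values the Google Natural Language API returns (a score in [-1, 1] and a non-negative magnitude); the specific hue/saturation/speed mapping rules are illustrative assumptions, not the paper's actual rules.

```python
# Minimal sketch: map a sentiment (score, magnitude) pair -- the shape of the
# values returned by the Google Natural Language API -- onto hypothetical
# visual parameters (hue, saturation, motion speed).

def sentiment_to_visual(score: float, magnitude: float) -> dict:
    """score in [-1.0, 1.0], magnitude >= 0.0 (Google NL API convention)."""
    if not -1.0 <= score <= 1.0:
        raise ValueError("score must be in [-1.0, 1.0]")
    # Negative sentiment -> cool hues (toward 240, blue); positive -> warm (toward 0).
    hue = (1.0 - (score + 1.0) / 2.0) * 240.0
    # Stronger emotion (higher magnitude) -> more saturated color, capped at 1.0.
    saturation = min(magnitude / 4.0, 1.0)
    # Faster motion for more intense sentiment.
    speed = 0.5 + saturation * 1.5
    return {"hue": round(hue, 1), "saturation": round(saturation, 2), "speed": round(speed, 2)}

# Example: a strongly positive, emotionally intense lyric line.
print(sentiment_to_visual(0.8, 3.2))  # -> {'hue': 24.0, 'saturation': 0.8, 'speed': 1.7}
```

The point of such a deterministic rule table is that every lyric line with the same sentiment profile renders consistently, which is what makes the visualization legible as an expression of the analysis.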

User Experience Analysis and Management Based on Text Mining: A Smart Speaker Case (텍스트 마이닝 기반 사용자 경험 분석 및 관리: 스마트 스피커 사례)

  • Dine Yeon;Gayeon Park;Hee-Woong Kim
    • Information Systems Review / v.22 no.2 / pp.77-99 / 2020
  • A smart speaker is a device providing an interactive voice-based service that uses artificial intelligence to search and use information and content such as music, calendars, weather, and merchandise. Since AI technology provides more sophisticated and optimized services as it accumulates data, early smart speaker manufacturers tried to build platforms through aggressive marketing. However, more than a third of owners use their smart speaker less than once a month, and user satisfaction stands at only 49%. Accordingly, strengthening the smart speaker user experience has become necessary to acquire a large user base and enable continuous use. This study therefore analyzes the user experience of the smart speaker and proposes methods for enhancing it. Based on a two-stage analysis, we propose ways to enhance the user experience for each model. Existing research on the smart speaker user experience relied mainly on surveys and interviews, whereas this study collected actual review data written by users and interpreted the analysis results in terms of smart speaker user-experience dimensions. Developing those dimensions to interpret the text-mining results is the study's academic contribution. Based on the results, we suggest strategies that smart speaker manufacturers can use to enhance the user experience.
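The study mines real user reviews and reads the results against user-experience dimensions. A toy stdlib-only sketch of that kind of pipeline is shown below; the reviews, the dimension names, and their indicator keywords are all invented for illustration, not taken from the paper.

```python
# Illustrative sketch of dimension-oriented text mining over smart speaker
# reviews: tokenize review text and count terms tied to assumed UX dimensions.
import re
from collections import Counter

UX_DIMENSIONS = {  # hypothetical dimension -> indicator keywords
    "usefulness": {"music", "weather", "search", "calendar"},
    "reliability": {"wrong", "error", "misunderstood", "failed"},
}

def score_dimensions(reviews):
    """Count keyword hits per dimension across a list of review strings."""
    tokens = Counter(
        word for text in reviews for word in re.findall(r"[a-z]+", text.lower())
    )
    return {dim: sum(tokens[w] for w in words) for dim, words in UX_DIMENSIONS.items()}

reviews = [
    "Great for music and weather updates.",
    "It misunderstood me twice and gave the wrong answer.",
]
print(score_dimensions(reviews))  # -> {'usefulness': 2, 'reliability': 2}
```

A real analysis would use Korean-capable tokenization and a learned topic or sentiment model rather than fixed keyword sets, but the aggregation step, mapping token counts onto named experience dimensions, has this shape.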

Integrated Arts Education Program with AI Literacy

  • Jihye Kim;SunKwan Han
    • Journal of the Korea Society of Computer and Information / v.28 no.12 / pp.281-288 / 2023
  • This study developed an integrated arts education program to improve AI literacy among elementary school students. First, we developed two thematic programs based on research into the goals of the art, music, and physical education curricula in the 2022 revised elementary school curriculum and on a matrix of goals and elements of integrated arts education. The program was revised and supplemented after a first expert validity test, and revised a second time based on students' AI literacy pre/post-test results and a satisfaction survey. The final program was completed through a third expert validity test. We hope the developed program will be used as a convergence education program to cultivate AI literacy in elementary school students.

AI Art Creation Case Study for AI Film & Video Content (AI 영화영상콘텐츠를 위한 AI 예술창작 사례연구)

  • Jeon, Byoungwon
    • The Journal of the Convergence on Culture Technology / v.7 no.2 / pp.85-95 / 2021
  • Currently, we stand between the computer as a creative tool and the computer as a creator. A new genre of film, which can be called a post-cinema situation, is emerging. This paper aims to diagnose the possibility of the emergence of AI cinema. To confirm this possibility, case studies examined whether artificial intelligence can create the story, narrative, image, and sound that are necessary conditions of filmmaking. First, we examined visual creation by the art collective Obvious and the GAN and CAN painting algorithms. Second, AI music has already reached market distribution in cooperation with humans. Third, AI can already complete drama scripts, and automatic scenario-writing programs using big data are gaining popularity. We thus confirmed that the requirements of filmmaking can be met with AI algorithms. From the perspective of Manovich's 'AI genre convention', web documentaries and desktop documentaries, typical post-cinema trends, are representative genres that can be expected to become AI cinema: the conditions for AI cinema, web documentaries, and desktop documentaries to exist are the same. This article suggests a new path for the media of the Fourth Industrial Revolution era through research on AI as a creator of post-cinema.

SAAnnot-C3Pap: Ground Truth Collection Technique of Playing Posture Using Semi Automatic Annotation Method (SAAnnot-C3Pap: 반자동 주석화 방법을 적용한 연주 자세의 그라운드 트루스 수집 기법)

  • Park, So-Hyun;Kim, Seo-Yeon;Park, Young-Ho
    • KIPS Transactions on Software and Data Engineering / v.11 no.10 / pp.409-418 / 2022
  • In this paper, we propose SAAnnot-C3Pap, a semi-automatic annotation method for obtaining ground truth of a player's posture. In the music domain, ground truth for two-dimensional joint positions has previously been obtained either with OpenPose, a two-dimensional pose estimation method, or by manual labeling. However, fully automatic annotation methods such as OpenPose are fast but can produce inaccurate results. This paper therefore proposes SAAnnot-C3Pap, a semi-automatic annotation method that is a compromise between the two. The proposed approach consists of three main steps: extracting postures using OpenPose, correcting the erroneous parts of the extracted postures using Supervisely, and synchronizing the results of OpenPose and Supervisely. The proposed method made it possible to correct the incorrect 2D joint positions produced by OpenPose, solve the problem of two or more people being detected, and obtain ground truth for playing postures. In the experiment, we compare and analyze the results of OpenPose alone against the proposed SAAnnot-C3Pap; the comparison showed that the proposed method improved the posture information that OpenPose had collected incorrectly.
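The core of the semi-automatic idea, automatic estimation overlaid with manual corrections, can be sketched in a few lines. The joint names, coordinates, and confidence threshold below are illustrative assumptions about an OpenPose-style `{joint: (x, y, confidence)}` output, not the paper's actual data format.

```python
# Sketch of semi-automatic annotation: keep confident automatic joints,
# replace the ones a human corrected, and drop low-confidence leftovers
# so they can be flagged for re-annotation.

def merge_annotations(auto_joints, manual_fixes, min_conf=0.5):
    """Merge OpenPose-style joints {name: (x, y, conf)} with manual fixes
    {name: (x, y)}. Manual fixes are treated as ground truth (conf 1.0)."""
    merged = {}
    for joint, (x, y, conf) in auto_joints.items():
        if joint in manual_fixes:
            mx, my = manual_fixes[joint]
            merged[joint] = (mx, my, 1.0)       # human correction wins
        elif conf >= min_conf:
            merged[joint] = (x, y, conf)        # trust the automatic estimate
        # low-confidence, uncorrected joints are omitted for re-annotation
    return merged

auto = {"r_wrist": (310.0, 420.5, 0.91), "l_elbow": (120.2, 300.0, 0.12)}
fixes = {"l_elbow": (125.0, 298.0)}
print(merge_annotations(auto, fixes))
# -> {'r_wrist': (310.0, 420.5, 0.91), 'l_elbow': (125.0, 298.0, 1.0)}
```

The synchronization step the abstract mentions is, in essence, this merge: reconciling two annotation sources into one consistent ground-truth record per frame.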