• Title/Summary/Keyword: 관객반응 (audience response)

Search Results: 66

A Study of Voice Recognition-Based Interactive Media Art - Focused on the Sound-Visual Interactive Installation "Water Music" - (음성인식 기반 인터렉티브 미디어아트의 연구 - 소리-시각 인터렉티브 설치미술 "Water Music" 을 중심으로-)

  • Lee, Myung-Hak;Jiang, Cheng-Ri;Kim, Bong-Hwa;Kim, Kyu-Jung
    • 한국HCI학회:학술대회논문집 / 2008.02a / pp.354-359 / 2008
  • This audio-visual interactive installation is composed of a video projection and digital interface technology combined with recognition of the viewer's voice. The viewer can interact with the computer-generated moving images growing on the screen by blowing or making sounds. This symbiotic audio and visual installation environment allows viewers to experience an illusionistic space physically as well as psychologically. The main programming technologies used to generate the moving water waves that interact with the viewer in this installation are Visual C++ and the DirectX SDK. For making the water waves, full-3D rendering technology and a particle system were used.
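The abstract credits Visual C++ and the DirectX SDK with generating the interactive water waves. As a language-neutral sketch of the general technique, the classic two-buffer height-field update behind interactive ripples can be written in a few lines of Python; the grid size, damping factor, and the idea of seeding a ripple from the viewer's breath are illustrative assumptions, not the authors' code.

```python
# Two height buffers: `prev` and `curr`. Each step averages a cell's four
# neighbours, subtracts the previous height, and damps the result -- the
# standard discrete wave update used for real-time water surfaces.
def step(prev, curr, damping=0.99):
    n = len(curr)
    nxt = [[0.0] * n for _ in range(n)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            nxt[i][j] = ((curr[i - 1][j] + curr[i + 1][j] +
                          curr[i][j - 1] + curr[i][j + 1]) / 2.0
                         - prev[i][j]) * damping
    return curr, nxt  # the new frame becomes `curr`, the old one `prev`

N = 16
prev = [[0.0] * N for _ in range(N)]
curr = [[0.0] * N for _ in range(N)]
curr[N // 2][N // 2] = 1.0  # a disturbance where the "breath" hits the water
for _ in range(5):
    prev, curr = step(prev, curr)  # the spike spreads outward as a ring
```

In the installation this update would run per frame on the GPU and feed the 3-D renderer; the two-buffer swap is what makes the ripple propagate rather than simply decay in place.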

A study of the use of media in modern performance (현대 연극공연에서의 영상 미디어활용에 관한 연구)

  • Lee, Jae-Joong;Kim, Hyeong-Gi
    • 한국HCI학회:학술대회논문집 / 2007.02b / pp.484-488 / 2007
  • Theater is a composite performing-art genre: not a single element but many elements combine to create a theatrical work. In modern theater performances, the role of video imagery grows day by day, which may be a natural consequence of the sense of space, the narrowness of the stage, and the constraints on expression. A play is ultimately completed when the actors and the audience breathe together, so the video must be able to respond in real time to the actors' performance. To achieve this, one must consider not only the video itself but also, to a considerable degree, the control of video that can interact with the actors. Moreover, because the hot stage lighting and the projected image beneath it are both works made of light, the video cannot escape light as an environmental factor. The presence of the screen and its harmony with the stage structures must also be considered. Ultimately, however, the success of a play is determined by the degree of audience immersion. Immersion factors include existing theatrical elements such as music, actors, and the stage; here, the use of video becomes another important means of expression that can draw the audience more deeply into the play. The problems of using video are currently being solved with various technologies and methods, and the video used in theater has ever greater potential for development and practical value. This paper is a study of the importance of using video media in the space of theater, a form of popular culture, and of the potential for development with respect to audience immersion.

A Study on Audio-Visual Interactive Art Interacting with Sound - Focused on 21C Boogie Woogie (사운드에 반응하는 시청각적인 인터랙티브 아트에 관한 연구)

  • Son, Jin-Seok;Yang, Jee-Hyun;Kim, Kyu-Jung
    • Cartoon and Animation Studies / s.35 / pp.329-346 / 2014
  • Art is the product of the combination of politics, economy, and social and cultural aspects. The recent development of digital media has affected the expansion of visual expression in art. Digital media allow artists to use sound and physical interaction, as well as image, as plastic elements for making a work of art. Digital media also help artists create an interactive, synaesthetic, and visually perceptive environment by combining viewers' physical interaction with the reconstruction of image, sound, light, and other plastic elements. This research focused on analyzing the relationship between the images in an artwork and the viewer, and on data visualization using sound, from the perspective of visual perception. It also aimed to develop an interactive artwork by visualizing physical data from sound generated by outer stimuli or by the viewer. Physical data generated from outer sound can be analyzed in various respects; for example, sound data can be analyzed and sampled in terms of pitch, volume, and frequency. This researcher implemented a new form of media art through a visual experiment with LED light triggered by the sound frequency of viewers' voices or outer physical stimuli, and explored the possibility of varied visual expression generated by the viewer's reaction to the illusionary characteristics of light (LED), which can be transformed by external physical data in real time. As a motif, this researcher used Piet Mondrian's Broadway Boogie Woogie to implement a visually perceptive interactive work reacting to sound. Mondrian tried to approach the essence of the visual object by eliminating unnecessary representational elements, simplifying them in painting, and turning them into abstractions consisting of color and vertical and horizontal lines. This researcher utilized Mondrian's simplified visual composition as a representational metaphor to transform external sound stimuli into the element of light (LED), and implemented an environment inducing viewers' participation: a dynamic composition maximizing synaesthetic expression, differing from Mondrian's static composition.
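The analysis the abstract mentions (sampling sound for pitch, volume, and frequency, then driving LED light) can be illustrated with a small Python sketch: a discrete Fourier transform finds the dominant frequency of a tone, and a simple gradient maps it to an RGB value. The sample rate, the red-to-blue mapping, and all names here are assumptions for illustration, not the work's actual implementation.

```python
import cmath
import math

SAMPLE_RATE = 8000  # assumed sample rate (Hz)

def dominant_frequency(samples):
    """Naive DFT; returns the strongest frequency bin in Hz."""
    n = len(samples)
    best_bin, best_mag = 0, 0.0
    for k in range(1, n // 2):
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_bin, best_mag = k, abs(coeff)
    return best_bin * SAMPLE_RATE / n

def frequency_to_rgb(freq, lo=100.0, hi=1000.0):
    """Map a frequency in [lo, hi] onto a red-to-blue LED gradient."""
    t = min(1.0, max(0.0, (freq - lo) / (hi - lo)))
    return (int(255 * (1 - t)), 0, int(255 * t))

# A 25 ms, 440 Hz test tone standing in for a viewer's voice.
tone = [math.sin(2 * math.pi * 440 * t / SAMPLE_RATE) for t in range(200)]
freq = dominant_frequency(tone)   # 440.0
rgb = frequency_to_rgb(freq)
```

A real installation would use an FFT over a live microphone buffer rather than this O(n²) DFT, but the mapping step (frequency in, LED color out) is the same shape.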

Sustaining Dramatic Communication Between the Audience and Characters through a Realization (관객과 인물의 극적소통을 위한 사실화연구 : 영화 '시'를 중심으로)

  • Kim, Dong-Hyun
    • Cartoon and Animation Studies / s.24 / pp.173-197 / 2011
  • Through a story, the audience moves between fiction and reality. A story is an emotional experience that appeals to human feeling. The rational function of a story is to convey knowledge and information, and its emotional function is to touch the audience. Moreover, these aspects of a story are linked to its language, text, and imagery. This paper focuses on the emotional function of a story. In an experiential story, the audience's emotional response is the result of maximum dramatic communication between them and the characters. Through psychological and mental communion with the characters, the audience becomes immersed in the story when they emotionally identify with the characters, and dramatic communication is achieved. However, dramatic communication is mostly achieved instantaneously. The elements of a film need to be realized to sustain dramatic communication such that the audience continues to be immersed in the story. The audience can identify with characters placed in real-life situations by considering the characters' external and internal aspects. External search pertains to the tangible aspects of a character, such as its background, life, and conversation; through external search, the characters communicate with the audience. Internal search deals with aspects of the characters' personality, such as their self-concept, desires, and internal conflicts; through internal search, the audience understands the inner side of the characters. In this process, a film director should ensure that the acting depicts the inner side of the characters. In other words, the director should perfectly depict the internal and external elements of a human on screen. Appropriate visualization can lead to dramatic communication with the characters and thereby create the audience's emotional response. Considering these techniques, this paper focuses on the scenes of the film "Poetry" in which dramatic communication with the characters creates the audience's emotional response. Accordingly, the audience plays a role in sustaining dramatic communication for the physical screen time of a film.

The Intelligent Determination Model of Audience Emotion for Implementing Personalized Exhibition (개인화 전시 서비스 구현을 위한 지능형 관객 감정 판단 모형)

  • Jung, Min-Kyu;Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.18 no.1 / pp.39-57 / 2012
  • Recently, with the introduction of high-tech equipment, much attention has been focused on interactive exhibits, which can amplify the exhibition effect through interaction with the audience. A variety of audience reactions can also be measured in an interactive exhibition. Among these, this research uses changes in facial features that can be collected in an interactive exhibition space. This research develops an artificial neural network-based prediction model that predicts the response of the audience by measuring the change in facial features when the audience is given a stimulus from the non-excited state. To represent the emotional state of the audience, this research uses a Valence-Arousal model and suggests an overall framework composed of the following six steps. The first step collects data for modeling; the data were collected from people who participated in the 2012 Seoul DMC Culture Open and were used for the experiments. The second step extracts 64 facial features from the collected data and compensates the facial feature values. The third step generates the independent and dependent variables of an artificial neural network model. The fourth step extracts the independent variables that affect the dependent variable using statistical techniques. The fifth step builds an artificial neural network model and performs a learning process using the training and test sets. Finally, the sixth step validates the prediction performance of the artificial neural network model using the validation data set. The proposed model was compared with a statistical predictive model to see whether it performed better. As a result, although the data set in this experiment contained much noise, the proposed model showed better results than a multiple regression analysis model. If this prediction model of audience reaction were used in a real exhibition, it would be possible to provide responses and services appropriate to the reaction of the audience viewing the exhibits. Specifically, if the audience's arousal toward an exhibit is low, action can be taken to increase it, for instance by recommending other preferred content or by using light or sound to focus attention on the exhibit. In other words, when planning future exhibitions, it would become possible to plan exhibitions that satisfy various audience preferences, and a personalized environment for concentrating on the exhibits is expected to be fostered. However, the proposed model still shows low prediction accuracy, for the following reasons. First, the data cover diverse visitors to real exhibitions, so it was difficult to control an optimized experimental environment; the collected data therefore contain much noise, which lowers accuracy. In further research, data collection will be conducted in a more controlled experimental environment, and work to increase the prediction accuracy of the model will continue. Second, using changes in facial expression alone is thought to be insufficient for extracting audience emotions; if facial expression were combined with other responses, such as sound or audience behavior, it would yield better results.
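The six-step framework above can be reduced to a runnable toy: synthetic "facial feature change" vectors go in, an arousal estimate comes out. The paper trains an artificial neural network; as a stand-in, this pure-Python sketch fits a single linear unit by stochastic gradient descent, and the data, dimensions, and learning rate are all illustrative assumptions.

```python
import random

def train_linear(X, y, lr=0.05, epochs=500):
    """Fit y ~ w.x + b by stochastic gradient descent (ANN stand-in)."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - target
            for i in range(len(w)):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Toy data: rows are facial-feature changes from the non-excited baseline,
# targets are arousal on a [-1, 1] scale (synthetic ground truth).
random.seed(0)
X = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(40)]
y = [0.8 * x[0] - 0.3 * x[1] + 0.1 for x in X]

w, b = train_linear(X, y)
arousal = predict(w, b, [1.0, 0.0, 0.0])  # approaches 0.8 + 0.1 = 0.9
```

Steps two through four of the paper (feature extraction, variable generation, variable selection) would populate `X` and `y` from the 64 compensated features; steps five and six correspond to the training call and a held-out validation pass.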

A Study of Breath and Utterance for Jazz Vocalist (JAZZ VOCALIST를 위한 호흡과 발성에 관한 연구 -지성으로 노래하기 중심으로-)

  • Kim, Hye-Yeon;Cho, Tae-Seon
    • Proceedings of the KAIS Fall Conference / 2011.12a / pp.50-53 / 2011
  • Jazz music and jazz concerts have recently become one of the most accessible genres of culture and art. Many audience members attend jazz performances, and such performances have established themselves as a representative performance culture of today. In contrast, evaluations of the musical performances themselves and critiques of audience responses are often unprofessional, which has created a need for education on the understanding and importance of more specialized vocal performance. This paper is a study of breathing and a vocal method for singing everything a jazz singer wishes to express freely and comfortably, as if speaking ('지성으로 노래하기').

Multimodal Emotional State Estimation Model for Implementation of Intelligent Exhibition Services (지능형 전시 서비스 구현을 위한 멀티모달 감정 상태 추정 모형)

  • Lee, Kichun;Choi, So Yun;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.1-14 / 2014
  • Both researchers and practitioners are showing an increased interest in interactive exhibition services. Interactive exhibition services are designed to respond directly to visitor responses in real time, so as to fully engage visitors' interest and enhance their satisfaction. In order to deploy an effective interactive exhibition service, it is essential to adopt intelligent technologies that enable accurate estimation of a visitor's emotional state from responses to the exhibited stimulus. Studies undertaken so far have attempted to estimate the human emotional state, most of them by gauging either facial expressions or audio responses. However, the most recent research suggests that a multimodal approach that uses people's multiple responses simultaneously may lead to better estimation. Given this context, we propose a new multimodal emotional state estimation model that uses various responses, including facial expressions, gestures, and movements, measured by the Microsoft Kinect Sensor. In order to effectively handle a large amount of sensory data, we propose stratified sampling-based MRA (multiple regression analysis) as our estimation method. To validate the usefulness of the proposed model, we collected 602,599 responses and emotional state data with 274 variables from 15 people. When we applied our model to the data set, we found that it estimated the levels of valence and arousal within a 10~15% error range. Since our proposed model is simple and stable, we expect that it will be applied not only in intelligent exhibition services but also in other areas such as e-learning and personalized advertising.
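The estimation method named above (stratified sampling followed by multiple regression) can be sketched compactly. In this toy version the strata are time windows of a synthetic sensor stream, and a one-predictor least-squares fit stands in for the full MRA over 274 variables; the variable names and the linear ground truth are illustrative assumptions.

```python
import random

def stratified_sample(rows, key, frac, rng):
    """Draw `frac` of each stratum so every stratum stays represented."""
    strata = {}
    for row in rows:
        strata.setdefault(key(row), []).append(row)
    sample = []
    for members in strata.values():
        k = max(1, int(len(members) * frac))
        sample.extend(rng.sample(members, k))
    return sample

def fit_simple_regression(xs, ys):
    """Ordinary least squares for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    slope = cov / var
    return slope, my - slope * mx

rng = random.Random(42)
# Synthetic sensor stream: (time_window, gesture_speed, arousal), with
# arousal depending linearly on gesture speed.
rows = []
for t in range(1000):
    speed = rng.uniform(0.0, 2.0)
    rows.append((t // 100, speed, 0.5 * speed + 0.1))

sample = stratified_sample(rows, key=lambda r: r[0], frac=0.2, rng=rng)
slope, intercept = fit_simple_regression([r[1] for r in sample],
                                         [r[2] for r in sample])
```

The point of stratifying before regressing is that a fifth of the data suffices while every time window (stratum) still contributes, which is how a 602,599-row stream stays tractable.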

Comparison of audience response between virtual exhibition and on-site exhibition contents in non-face-to-face situations (비대면 상황에서 가상 전시와 현장 전시 콘텐츠의 관객 반응 비교)

  • Jeong, Ye-Eun;Nam, Geurin;Kwon, Koojoo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.12 / pp.1838-1845 / 2022
  • Due to COVID-19, face-to-face on-site exhibitions could not be held. At the same time, technologies such as virtual reality and augmented reality are attracting increasing attention and are being used in many fields. In this study, we produced virtual exhibition content using virtual reality technology. Our virtual exhibition was held in parallel with an on-site exhibition of the same contents, and it also provided a realistic experience using a head-mounted display. In order to provide a high sense of presence, we created an open virtual space and used realistic 3D objects and textures. Although it is a virtual exhibition, the audience can experience a sense of realism similar to that of an on-site exhibition. After the exhibition was over, a survey was conducted with the audience members who participated in both the on-site and virtual exhibitions to analyze their responses. As a result of the survey, the similarity and immersion of the virtual exhibition were rated highly.

A Study on the Method of Creating Realistic Content in Audience-participating Performances using Artificial Intelligence Sentiment Analysis Technology (인공지능 감정분석 기술을 이용한 관객 참여형 공연에서의 실감형 콘텐츠 생성 방식에 관한 연구)

  • Kim, Jihee;Oh, Jinhee;Kim, Myeungjin;Lim, Yangkyu
    • Journal of Broadcast Engineering / v.26 no.5 / pp.533-542 / 2021
  • In this study, a process for re-creating Jindo Buk Chum, one of the traditional Korean arts, as digital art using various artificial intelligence technologies is proposed. The audience's emotional data, quantified through artificial intelligence language analysis technology, intervenes in the projection-mapping performance in the form of various objects and influences the overall story without changing it. Whereas most interactive art expresses communication between the performer and the video, this performance becomes a new type of responsive performance that allows the audience to communicate directly with the work, centered on artificial intelligence emotion analysis technology. The starting point is 'Chuimsae', a practice found only in Korean traditional art, in which the audience directly or indirectly intervenes in and influences the performance. Based on the emotional information contained in the performer's 'prologue', the audience's emotional information is combined with it and converted into the images and particles used in the performance, so that the audience indirectly participates in and changes the performance.
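The pipeline described (audience text, quantified emotion, then visual parameters) can be illustrated with a deliberately tiny stand-in: a toy sentiment lexicon scores a line of 'Chuimsae' text, and the score sets a particle count and hue. The lexicon, scales, and mapping are invented for illustration and are not the production system.

```python
# Toy sentiment lexicon: word -> polarity in [-1, 1]. A real system would
# use a trained language-analysis model instead.
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "sad": -0.5}

def sentiment(text):
    """Average the polarities of known words; 0.0 if none are known."""
    words = text.lower().split()
    hits = [LEXICON[w] for w in words if w in LEXICON]
    return sum(hits) / len(hits) if hits else 0.0

def particle_params(score):
    """Map a score in [-1, 1] to (particle_count, hue_degrees)."""
    count = int(100 + 400 * (score + 1) / 2)  # 100..500 particles
    hue = int(240 - 120 * (score + 1) / 2)    # blue (240) -> green (120)
    return count, hue

count, hue = particle_params(sentiment("great show"))
```

In the performance itself the score would also be blended with the emotion extracted from the performer's prologue before it drives the projection-mapped visuals.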

Artificial Intelligence Art : A Case study on the Artwork An Evolving GAIA (대화형 인공지능 아트 작품의 제작 연구 :진화하는 신, 가이아(An Evolving GAIA)사례를 중심으로)

  • Roh, Jinah
    • The Journal of the Korea Contents Association / v.18 no.5 / pp.311-318 / 2018
  • This paper presents the artistic background and implementation structure of a conversational artificial intelligence interactive artwork, "An Evolving GAIA". Recent artworks based on artificial intelligence technology are introduced. The development of biomimetics and artificial life technology has blurred the distinction between machine and human. This paper surveys artworks presenting a machine-as-life metaphor and describes the implementation of the conversation system in detail. The artwork recognizes and follows the movement of the audience with its eyes for natural interaction. It listens to the audience's questions and replies with appropriate answers in a text-to-speech voice, using a conversation system implemented with an Android client in the artwork and a webserver based on a question-answering dictionary. The interaction invites the audience to reflect on the meaning of life on a large scale and draws sympathy for the artwork itself. The paper shows the mechanical structure of the artwork, the implementation of its conversational system, and the reaction of the audience, which can be helpful in directing and producing future artificial intelligence interactive artworks.
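A question-answering dictionary behind a webserver, as the abstract describes, reduces at its core to matching audience questions against stored entries. This minimal Python sketch uses keyword lookup; the entries and the fallback reply are invented placeholders, not the artwork's actual dictionary.

```python
# Invented placeholder entries: keyword -> reply. The real artwork's
# dictionary and matching logic live on its webserver.
QA_DICTIONARY = {
    "alive": "I am a machine, yet I evolve with every question you ask.",
    "name": "I am GAIA, an evolving conversational artwork.",
}
DEFAULT_ANSWER = "Ask me about life, and we will think together."

def answer(question):
    """Return the first reply whose keyword appears in the question."""
    q = question.lower()
    for keyword, reply in QA_DICTIONARY.items():
        if keyword in q:
            return reply
    return DEFAULT_ANSWER
```

In the installation, the Android client would send the recognized speech to this lookup over HTTP and pass the returned string to a text-to-speech engine.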