• Title/Summary/Keyword: contents scene


Study of Scene Directing with Cinemachine

  • Park, Sung-Suk;Kim, Jae-Ho
    • International Journal of Contents
    • /
    • v.18 no.1
    • /
    • pp.98-104
    • /
    • 2022
  • With Unity, creating footage is possible using 3D motion, 2D motion, particles, and sound, and even post-production video editing is possible by combining the footage. In particular, Cinemachine, a suite of camera tools for Unity that greatly affects screen layout and the flow of video images, can implement most of the functions of a physical camera, and visual aesthetics can be achieved through it. However, since it is part of a game engine, an understanding of the game engine must come first, and doubts may arise as to how similar it is to a physical camera. Accordingly, the purpose of this study is to examine the advantages and cautions of virtual cameras in Cinemachine, and to explore their potential for development by implementing storytelling directly.

A Study of the Reactive Movement Synchronization for Analysis of Group Flow (그룹 몰입도 판단을 위한 움직임 동기화 연구)

  • Ryu, Joon Mo;Park, Seung-Bo;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.79-94
    • /
    • 2013
  • Recently, high-value-added business has been growing steadily in the culture and art area. To generate high value from a performance, audience satisfaction is necessary. Flow is a critical factor in satisfaction, and it should be induced in the audience and measured. To evaluate the interest and emotion of an audience toward contents, producers or investors need an index for measuring flow. However, it is neither easy to define flow quantitatively nor to collect the audience's reactions immediately. Previous studies evaluated group flow as the sum of the average value of each person's reaction: the flow, or "good feeling," of each audience member was extracted from the face, especially changes of expression, and from body movement. But it was not easy to handle the large amount of real-time data from each sensor signal, and it was also difficult to set up the experimental devices, in terms of economic and environmental problems, because every participant needed a personal sensor to check physical signals and a camera placed in front of the head to capture facial expressions. Therefore, a simpler system is needed to analyze group flow. This study provides a method for measuring audience flow through group synchronization at the same time and place. To measure synchronization, we built a real-time processing system using differential images and the Group Emotion Analysis (GEA) system. A differential image is obtained from the camera by subtracting the previous frame from the present frame, which yields the movement variation in the audience's reaction. We then developed a program, GEA (Group Emotion Analysis), as a flow judgment model. After measuring the audience's reaction, synchronization is divided into Dynamic State Synchronization and Static State Synchronization. Dynamic State Synchronization accompanies the audience's active reactions, while Static State Synchronization corresponds to little audience movement. Dynamic State Synchronization can be caused by surprised reactions to scary, creepy, or reversal scenes, while Static State Synchronization is triggered by touching or sad scenes. Therefore, we showed the audience several short movies containing the various kinds of scenes mentioned above, which made them sad, made them clap, gave them the creeps, and so on. To check the movement of the audience, we defined the critical points α and β. Dynamic State Synchronization is meaningful when the movement value is over the critical point β, while Static State Synchronization is effective under the critical point α. β was derived from the clapping movements of 10 teams instead of using the average amount of movement. After checking the reactive movement of the audience, the ratio (%) was calculated by dividing the number of people having a reaction by the total number of people. A total of 37 teams were formed at the 2012 Seoul DMC Culture Open and took part in the experiments. First, they were induced to clap by the staff. Second, a basic scene was shown to neutralize the audience's emotion. Third, a flow scene was displayed. Fourth, a reversal scene was introduced. Then 24 of the teams were shown amusing and creepy scenes, while the other 10 teams were shown a sad scene. The audience clapped and laughed at the amusing scene, shook their heads or hid by closing their eyes at the creepy one, and fell silent at the sad or touching scene. If the result was over about 80%, the group could be judged as synchronized and the flow as achieved. As a result, the audience showed similar reactions to similar stimulation at the same time and place. With additional normalization and experiments, the flow factor could be found through synchronization in much bigger groups, which should be useful for planning contents.
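The differential-image step the abstract describes (subtract the previous frame from the present frame, threshold the movement value against the two critical points) can be sketched as below. This is a minimal illustration, not the paper's implementation: the frames are mock grayscale pixel lists, and the threshold values for α and β are assumptions, not the paper's calibrated ones.

```python
# Sketch of the differential-image movement measure and the two-state
# synchronization judgment (alpha/beta thresholds are illustrative).

def movement_value(prev_frame, curr_frame):
    """Sum of absolute pixel differences between two grayscale frames."""
    return sum(abs(c - p) for p, c in zip(prev_frame, curr_frame))

def classify_sync(movement, alpha, beta):
    """Label a viewer's movement against the two critical points."""
    if movement > beta:
        return "dynamic"   # big reaction: clapping, flinching, laughing
    if movement < alpha:
        return "static"    # stillness: a touched or sad audience
    return "neutral"

def reaction_ratio(labels, target):
    """Fraction of audience members showing the target reaction."""
    return sum(1 for label in labels if label == target) / len(labels)

# Mock 4-pixel frames for three viewers at one time step.
prev = [10, 10, 10, 10]
frames = {"A": [90, 80, 90, 80], "B": [95, 85, 90, 90], "C": [11, 10, 12, 10]}
alpha, beta = 10, 200  # assumed thresholds
labels = [classify_sync(movement_value(prev, f), alpha, beta)
          for f in frames.values()]
# 2 of 3 viewers react dynamically: below the ~80% criterion, so this
# moment would not be judged as group synchronization.
synchronized = reaction_ratio(labels, "dynamic") >= 0.8
```

In the paper's setting the same ratio would be computed per scene over the whole group, and a sustained ratio above about 80% marks the group as synchronized.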

Use of Long Take in The Film <The Graduate> : Focused on Mise-en-Scène (영화 <졸업>에 나타난 롱테이크의 이용 : 미장센을 중심으로)

  • Yoon, Soo-In
    • The Journal of the Korea Contents Association
    • /
    • v.12 no.4
    • /
    • pp.143-155
    • /
    • 2012
  • The research started out of curiosity about what made the fast and thrilling pace of <The Graduate> possible even though it is an old movie. At the center of what made it possible was its use of the long take: the long take was used in all of the scenes, and in some cases only long takes were used. What is interesting is the filmmaker's use of various cinematic techniques to prevent the scenes from being too slow and to keep the audience immersed in the characters. Within one shot, acting in addition to mise-en-scène was used to provide psychological immersion in character and scene. The use of the long take, with the exception of some intentional scenes, is difficult to notice without conscious observation. All the components that make up the long take, camera movement and lighting as well as the actors' dialogue, performance, and movement within the scene, come together beautifully. The long take is generally replaced by many short shots; however, Mike Nichols clearly demonstrates its benefits. In the movie, the general aesthetics of the long take are slightly altered for a different purpose. The specific methods and effects used in the application of the long take are the subject of this study.

Analysis of Emotional Colors in the Mise-en-scène of the Film <The Royal Tenenbaums> (영화 <로얄 테넌바움> 미장센에 나타난 감성색채 이미지 분석)

  • Shim, Hyung-Keun
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.5
    • /
    • pp.261-270
    • /
    • 2020
  • In film, color is a tool for storytelling and a metaphor for a story's theme. This study constructs efficient and objective data by analyzing the color images of movies as delivered to the audience. It examines the visual perception process of color in films and the processes by which the audience accepts it. Through this research process, we examine the emotional response caused by the visual stimulus of film color and quantify color in film as a visual factor that effectively induces the emotional responses of viewers. This study analyzes the mise-en-scène of Wes Anderson's film The Royal Tenenbaums and studies the communicative role of cinematic colors. Quantitative analysis of color distribution data is performed with a computer color analysis program on the colors displayed through the mise-en-scène of 10 chapters. The color analysis showed that Anderson composed the movie's scenes in red (R) and yellow-red (YR) with low saturation and medium brightness. Through this analysis, we study how color is used throughout the film and how the quantitative form of its use can serve as a psychological factor controlling the audience's emotion.
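The kind of color-distribution tally the abstract refers to can be sketched as below. This is an assumption-laden illustration, not the paper's analysis program: it maps pixels to ten coarse Munsell-style hue families (R, YR, Y, ...) by splitting the HSV hue circle into ten equal 36° bins, and runs on a mock frame rather than film stills.

```python
import colorsys

# Ten Munsell-style hue families; the equal 36-degree split is an
# assumption for illustration, not the paper's exact color system.
FAMILIES = ["R", "YR", "Y", "GY", "G", "BG", "B", "PB", "P", "RP"]

def hue_family(r, g, b):
    """Map an RGB pixel (0-255 channels) to one of ten hue families."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return FAMILIES[int(h * 10) % 10]

def color_distribution(pixels):
    """Return the fraction of pixels falling in each hue family."""
    counts = {name: 0 for name in FAMILIES}
    for r, g, b in pixels:
        counts[hue_family(r, g, b)] += 1
    total = len(pixels)
    return {name: n / total for name, n in counts.items()}

# A warm mock frame: mostly reds and yellow-reds, a little blue.
frame = [(180, 90, 70)] * 60 + [(200, 180, 60)] * 30 + [(90, 110, 180)] * 10
dist = color_distribution(frame)
dominant = max(dist, key=dist.get)  # the frame's dominant hue family
```

Saturation and value from the same HSV conversion could be averaged per family to check the "low saturation, medium brightness" finding in the same way.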

A New Approach to Naturalness for Still Images Depending on TV Genre (TV화질에 있어서 자연스러움의 새로운 접근-TV장르)

  • Park, Yung-Kyung
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.251-258
    • /
    • 2010
  • 'Naturalness' is an important "ness" and a key factor in image quality assessment. It is a representative factor that depends on the context of the image, which arouses different emotions. The Image Quality Circle was split into two steps. The first step is predicting the visual perceptual attributes, which are lightness, colourfulness, hue, and contrast. The next step is the SSE, which depends on image content. In this study the image contents are grouped into genres. The images were rendered using four different colour attributes: lightness, contrast, colourfulness, and hue. Each participant was asked to score image quality and SSE for all rendered images on a scale; a seven-point category scale of increasing amount of "ness" was used as the quantitative adjective sequence. The image quality model was built by combining the SSEs for each scene, with the SSEs, of which vividness is common to all, used as independent variables to predict the image quality score. Then the vividness model was built using the colour attributes as variables to predict the vividness of each scene (genre). Vividness is an important factor of naturalness whose meaning differs across scenes, and it links naturalness and image quality. Since the meaning of vividness was different for each scene (genre), the colour attributes that express vividness depend on the image content.


A study on the scene directing and overacting character expressions in accordance with creating actual comedy movie into animation: Focusing on TV animation Mr. Bean (희극적 실사의 애니메이션화에 따른 장면 연출과 캐릭터 과장연기 표현 연구 - TV애니메이션 Mr.bean을 중심으로)

  • Park, Sung-Won
    • Cartoon and Animation Studies
    • /
    • s.49
    • /
    • pp.143-167
    • /
    • 2017
  • The purpose of this study is to analyze the characteristics that appear when an actual comedy movie is made into animation, starting from the singularities that arise when existing media contents are remade into animation. Animation and slapstick comedy movies are similar in that they both evoke laughter from the viewer via exaggerated motion, expression, and action. In live-action film, people must carry out the acting and spatial limitations exist, whereas animation has no such limits; this allows comic animation to materialize spatial scene directing and acting different from those of live-action comic films, despite the fact that they share elements of the same genre. Accordingly, this study comparatively analyzes episodes based on the same event from the English comedy TV program <Mr. Bean> and the TV animation remade from the original series, comparing the character acting and scene directing of the live-action movie and the animation. The results of studying what makes the comic genre easy to adapt into animation, and the advantages possessed by the medium of animation, through the analysis of two works that deal with the same character and the same event, were as follows. The analysis proved that comedic directing with doubled composure and amusement is possible through the anticipation of exaggerated directing in the acting through expression and action, the diversity of episodes with added imagination in the story, and the estrangement effect of slapstick expression.

Control Method of BIFS Contents for Mobile Devices with Restricted Input Key (제한적 키 입력을 갖는 휴대 단말에서의 BIFS 콘텐츠 제어방법)

  • Kim, Jong-Youn;Moon, Nam-Mee;Park, Joo-Kyung
    • Journal of Broadcast Engineering
    • /
    • v.15 no.3
    • /
    • pp.346-354
    • /
    • 2010
  • T-DMB uses the MPEG-4 BIFS standard format for interactive broadcast data services. BIFS enables contents to be represented as a scene consisting of various objects such as AV, image, graphics, and text, and enables those objects to be controlled through user interaction. BIFS was designed to be adaptable to multimedia systems with various input devices. Today, however, little consideration is given to mobile devices with restricted input units. The problem is that consistent user control of interactive data contents is not possible due to the limitations of the input units of T-DMB terminals. To solve the problem, we defined a KeyNavigator node that provides a means to select or navigate objects (such as menus) in BIFS contents using the arrow keys and enter key of a mobile terminal. With the KeyNavigator node, BIFS contents providers can make contents as they want, and users gain a way to control BIFS contents consistently and easily.
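The idea behind such a navigation node can be modeled, outside BIFS syntax, as a table mapping (currently focused object, key) to the next focused object, so that four arrow keys plus Enter suffice to drive any menu layout. The sketch below is a hypothetical model of that idea, not the paper's node definition or any MPEG-4 syntax; the menu names and callbacks are invented for illustration.

```python
# Hedged model of arrow-key scene navigation: a focus pointer plus a
# (object, key) -> next-object table, mirroring how restricted-input
# menu control can be declared for a scene.

class KeyNavigator:
    def __init__(self, start, moves, actions):
        self.focus = start
        self.moves = moves      # (object, key) -> next focused object
        self.actions = actions  # object -> callback run on "enter"

    def press(self, key):
        if key == "enter":
            return self.actions.get(self.focus, lambda: None)()
        # Unknown moves leave the focus where it is.
        self.focus = self.moves.get((self.focus, key), self.focus)
        return None

# Hypothetical two-item menu for a data-service screen.
nav = KeyNavigator(
    start="news",
    moves={("news", "down"): "weather", ("weather", "up"): "news"},
    actions={"weather": lambda: "show weather scene"},
)
nav.press("down")            # focus moves from "news" to "weather"
result = nav.press("enter")  # triggers the focused object's action
```

The point of declaring the table with the content, as the paper does with its node, is that the same five keys then work consistently for every menu an author designs.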

The Mobile Cartoons Authoring Method Using Scene Flow Mode (Scene flow 방식을 이용한 모바일 만화 저작 기법)

  • Cho, Eun-Ae;Koh, Hee-Chang;Mo, Hae-Gyu
    • Cartoon and Animation Studies
    • /
    • s.19
    • /
    • pp.113-126
    • /
    • 2010
  • The digital cartoon market is looking for new growth momentum as demand and markets for mobile contents increase rapidly with the popularization of portable devices. The conventional digital cartoon markets, based on webtoons, page-viewer cartoons, and e-paper cartoons, have been studied in various fields to overcome limitations such as those of traditional cartoons. Mobile cartoons, which keep changing, have canvas limitations due to mobile screen sizes. These limitations lead to communication problems between cartoonists and subscribers and create obstacles to the activation of mobile cartoons. In this paper, we developed an authoring tool that applies the scene flow method to overcome the inefficiency of conventional authoring methods. The proposed method can reflect the cartoonists' intentions during the process of authoring mobile cartoons; we then studied this authoring method for mobile cartoons and its effects, along with convenient ways for users to create and distribute content.


Improved Similarity Detection Algorithm of the Video Scene (개선된 비디오 장면 유사도 검출 알고리즘)

  • Yu, Ju-Won;Kim, Jong-Weon;Choi, Jong-Uk;Bae, Kyoung-Yul
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.2
    • /
    • pp.43-50
    • /
    • 2009
  • In this paper, we propose a similarity detection method for video frame data that extracts feature data from each video frame and creates a 1-D signal. To extract the similarity between videos, we find similar-frame boundaries and make representative frames within each boundary. The representative frames are blurred, and feature data are extracted using DoG (Difference of Gaussians) values. Finally, we convert the feature data into a 1-D signal and compare content similarity. The experimental results show that the proposed algorithm achieves a similarity value over 0.9 against noise addition, rotation change, size change, frame deletion, and frame cropping.
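The blur-then-DoG-then-correlate pipeline can be sketched as below. This is a simplified assumption-based illustration, not the paper's algorithm: frames are tiny mock grayscale grids, the two Gaussian scales and kernel radius are arbitrary choices, and normalized correlation stands in for whatever comparison the authors used.

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of length 2*radius + 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma))
         for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(signal, sigma, radius=3):
    """1-D Gaussian blur with edge clamping."""
    kernel = gaussian_kernel(sigma, radius)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def dog_signature(frame_rows):
    """1-D DoG signature: flatten the frame, blur at two scales, subtract."""
    flat = [p for row in frame_rows for p in row]
    return [a - b for a, b in zip(blur(flat, 1.0), blur(flat, 2.0))]

def similarity(sig_a, sig_b):
    """Normalized correlation in [-1, 1]; near 1 means near-duplicate."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    na = math.sqrt(sum(a * a for a in sig_a))
    nb = math.sqrt(sum(b * b for b in sig_b))
    return dot / (na * nb) if na and nb else 0.0

frame = [[10, 10, 200, 200], [10, 10, 200, 200]]
noisy = [[12, 9, 198, 203], [11, 10, 201, 199]]  # same scene, slight noise
score = similarity(dog_signature(frame), dog_signature(noisy))
```

Because the DoG signature responds to edges rather than absolute intensities, small noise barely moves the score, which is the kind of robustness the reported >0.9 results rely on.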

A New Camera System Implementation for Realistic Media-based Contents (실감미디어 기반의 콘텐츠를 위한 카메라 시스템의 구현)

  • Seo, Young Ho;Lee, Yoon Hyuk;Koo, Ja Myung;Kim, Woo Youl;Kim, Bo Ra;Kim, Moon Seok;Kim, Dong Wook
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.2
    • /
    • pp.99-109
    • /
    • 2013
  • In this paper, we propose and implement a new system that captures real depth and color information from a natural scene. Based on it, we produced stereo and multiview images for 3-dimensional stereoscopic contents and introduced the production of a digital hologram, which is considered the next-generation image. The system consists of a camera system for capturing RGB and depth images, and software (SW) for various kinds of image processing: pre-processing such as rectification and calibration, 3D warping, and computer-generated hologram (CGH). The camera system uses a vertical rig with two pairs of depth and RGB cameras and a specially manufactured cold mirror, whose transmittance differs with wavelength, to obtain images from the same viewpoint. The characteristic wavelength of our mirror is about 850 nm. Each algorithm was implemented in C and C++, and the implemented system operates in real time.