• Title/Summary/Keyword: scene graph (장면 그래프)

Search Results: 37

Effective 3D Object Selection Interface in Non-immersive Virtual Environment (비몰입형 가상환경에서 효과적인 3D객체선택 인터페이스)

  • 한덕수;임윤호;최윤철;임순범
    • Journal of Korea Multimedia Society / v.6 no.5 / pp.896-908 / 2003
  • Interaction technique in a 3D virtual environment is a decisive factor in the immersion and presence felt by users in virtual space. Especially in fields that require exquisite manipulation of objects, such as electronic manuals in a desktop environment, an interaction technique that supports effective and natural object manipulation is in demand. In this paper, the 3D scene graph is internally divided and reconstructed into a list depending on the user's selection, and by moving focus through the list of selection-candidate objects, the user can select a 3D object more accurately. Also, by providing various feedbacks at each manipulation stage, more effective manipulation is possible. The proposed technique can be used as a 3D user interface in areas that require exquisite object manipulation.

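The selection idea the abstract describes can be illustrated with a minimal sketch: the scene graph is flattened into a list of candidate objects, and the user cycles focus through that list to disambiguate the pick. All class and method names here are hypothetical, not from the paper.

```python
# Minimal sketch (hypothetical names): flatten a scene graph into a
# candidate list, then cycle focus through the candidates.

class SceneNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def traverse(self):
        """Depth-first flattening of the subtree."""
        yield self
        for child in self.children:
            yield from child.traverse()

class CandidateSelector:
    def __init__(self, root):
        # Only leaf nodes are selectable objects in this sketch.
        self.candidates = [n for n in root.traverse() if not n.children]
        self.focus = 0

    def next_focus(self):
        """Move focus to the next candidate, wrapping around."""
        self.focus = (self.focus + 1) % len(self.candidates)
        return self.candidates[self.focus].name

root = SceneNode("root", [
    SceneNode("table", [SceneNode("cup"), SceneNode("book")]),
    SceneNode("lamp"),
])
sel = CandidateSelector(root)
print(sel.next_focus())  # book
print(sel.next_focus())  # lamp
print(sel.next_focus())  # cup  (wraps around)
```

In the paper's setting, each focus change would be accompanied by visual feedback (e.g., highlighting) so the user can confirm the candidate before committing the selection.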

The Visual Counterpoint immanent in Production of Animated Characters' Changing Role -With Focus on the Lighting Design of 3D Animation Toystory3 Digital Colorscript - (애니메이션 캐릭터의 역할변화 연출에 내재된 시각적 대위법 - 3D 애니메이션 <토이스토리 3> 디지털 칼라스크립트의 조명디자인을 중심으로 -)

  • Park, Hyoung-Dong
    • Cartoon and Animation Studies / s.35 / pp.155-180 / 2014
  • The roles of the characters of the 3D animation Toystory3, which was released in 2010 and achieved worldwide success, can be classified into typical, simple, easy-to-understand roles such as hero, villain, princess, and assistant. However, the process by which each character's role is finally recognized by the audience is embodied in a very colorful and exquisite manner and effectively sustains the audience's curiosity. The stream of the characters' diverse role changes in Toystory3 is represented through the "visual rhythm of the lighting design," and this rhythm can be confirmed most clearly at the digital colorscript stage. This researcher analyzed the characters' role changes in the work based on Propp's folktale character analysis theory, and extracted the core scenes that lead to inference, doubt, performance, and reinforcement for each character, in order to grasp how the audience perceives the major characters' role changes. The visual differences in lighting design across the four core scenes of each character were graphed and analyzed; the results showed that the changes within one character gradually constitute a rhythmic visual contrast, and that the rhythms of the independent characters achieve visual contrast and harmony with each other, like the voice parts of polyphony. This researcher calls this "the visual counterpoint of character change" and concludes that a dual visual counterpoint is hidden in the character production of the feature-length animation Toystory3. Along with this, this researcher proposes feature-length animation production that actively utilizes this constructive aesthetic.

Design and Implementation of An MPEG-4 Dynamic Service Framework (MPEG-4 동적서비스 프레임워크 설계 및 구현)

  • 이광의
    • Journal of Korea Multimedia Society / v.5 no.5 / pp.488-493 / 2002
  • MPEG-4 movies are composed of several media objects organized in a hierarchical fashion. Those media objects are served to clients as elementary streams. To play the movie, client players compose the elementary streams according to the meta-information called the scene graph. The meta-information streams are delivered as BIFS and OD elementary streams. Using dynamically generated BIFS and OD streams, we can provide a service that differs from traditional file services. For example, we can insert weather or stock information at the bottom of the screen while an existing movie is playing. In this paper, we propose a dynamic service framework and a dynamic server. The dynamic service framework is an object-oriented framework that dynamically generates BIFS and OD streams based on external DB information. The dynamic server provides a GUI for server management and an interface for registering dynamic services. In this framework, a dynamic service has the same interface as a file service, so clients and other services treat a dynamic service exactly like a file service.

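The key design point of the abstract, that a dynamic service exposes the same interface as a file service so clients cannot tell them apart, is a classic object-oriented pattern. The following is a hedged sketch of that idea only; the class and method names (`StreamService`, `read_stream`, etc.) are invented for illustration and are not the paper's API.

```python
# Sketch of the framework idea: file-backed and dynamically generated
# streams share one interface, so the client code is identical for both.

from abc import ABC, abstractmethod

class StreamService(ABC):
    """Common interface for both kinds of services (hypothetical name)."""
    @abstractmethod
    def read_stream(self):
        ...

class FileService(StreamService):
    """Serves pre-authored stream data from a file."""
    def __init__(self, data):
        self.data = data
    def read_stream(self):
        return self.data

class DynamicService(StreamService):
    """Generates scene-description data on the fly from external info,
    e.g. weather or stock data pulled from a DB."""
    def __init__(self, fetch_info):
        self.fetch_info = fetch_info
    def read_stream(self):
        return f"overlay: {self.fetch_info()}"

def play(service: StreamService):
    # The client treats every service uniformly.
    return service.read_stream()

print(play(FileService("movie stream bytes")))
print(play(DynamicService(lambda: "KOSPI 2,600")))  # overlay: KOSPI 2,600
```

In the actual framework, `read_stream` would yield BIFS/OD elementary streams rather than strings, but the substitutability shown here is the point of the design.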

A Study of Story Visualization Based on Variation of Characters Relationship by Time (등장인물들의 시간적 관계 변화에 기초한 스토리 가시화에 관한 연구)

  • Park, Seung-Bo;Baek, Yeong Tae
    • Journal of the Korea Society of Computer and Information / v.18 no.3 / pp.119-126 / 2013
  • In this paper, we propose and describe a system to visualize the story of content such as movies and novels. Character-net is applied as the story model for visualization. However, although it can depict the relationships between characters, it accumulates them over the whole movie story. We have therefore developed a system that analyzes and shows the variation of the Character-net and of the characters' tendencies, in order to represent how the story varies as the movie progresses. The system is composed of two windows: one plays and analyzes sequential Character-nets over time, and the other analyzes the time-varying graph of the characters' degree centrality. The first window helps find important story points, such as the scenes where main characters first appear or first meet. The second window tracks each character's tendency, or the variation of that tendency, by analyzing in-degree and out-degree graphs. This paper describes the proposed system and discusses additional requirements.
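The second window's analysis can be sketched simply: given Character-net snapshots over time (directed speaker-to-listener edges), compute each character's in-degree and out-degree per snapshot. The data below is invented for illustration.

```python
# Minimal sketch of tracking in-degree/out-degree of one character
# across successive Character-net snapshots (invented example data).

# Each snapshot is a list of directed edges (speaker -> listener).
snapshots = [
    [("Alice", "Bob")],
    [("Alice", "Bob"), ("Bob", "Alice"), ("Carol", "Alice")],
    [("Bob", "Carol"), ("Carol", "Alice"), ("Alice", "Carol")],
]

def degree_series(snapshots, character):
    """Return (in_degree, out_degree) per snapshot for one character."""
    series = []
    for edges in snapshots:
        indeg = sum(1 for _, dst in edges if dst == character)
        outdeg = sum(1 for src, _ in edges if src == character)
        series.append((indeg, outdeg))
    return series

print(degree_series(snapshots, "Alice"))  # [(0, 1), (2, 1), (1, 1)]
```

Plotting such a series over time is what lets the analyst spot tendency changes, e.g., a character who starts mostly speaking (high out-degree) and later becomes mostly spoken to.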

Development of a Solid Modeler for Web-based Collaborative CAD System (웹 기반 협동CAD시스템의 솔리드 모델러 개발)

  • 김응곤;윤보열
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.5 / pp.747-754 / 2002
  • We propose a Web-based collaborative CAD system that is platform independent, and develop a 3D solid modeler for the system. We developed a new prototype of a Web-based 3D solid modeler using the Java 3D API, which can be executed without any additional 3D graphics software and works collaboratively, with each user interacting with the others. The modeler can create primitive objects and obtain various 3D objects by using a loader. Interactive control is available to manipulate objects: picking, translating, rotating, and zooming. Users connect to the solid modeler, create 3D objects, and modify them as they want. When this solid modeler is imported into a collaborative design system, it will prove its real worth in today's CAD systems. Moreover, if we improve the solid modeler by adding 3D graphics features such as rendering and animation, it will be able to support more detailed design and more effective views.
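The interactive controls mentioned above (translating, rotating, zooming) are conventionally expressed as 4x4 homogeneous transforms; Java 3D's `TransformGroup`/`Transform3D` work this way. The following standalone sketch shows the same math in plain Python, with no claim to match the paper's implementation.

```python
# Sketch of translate / rotate / zoom as 4x4 homogeneous transforms,
# applied to a 3D point (illustrative, not the paper's code).

import math

def identity():
    return [[float(i == j) for j in range(4)] for i in range(4)]

def translate(tx, ty, tz):
    m = identity()
    m[0][3], m[1][3], m[2][3] = tx, ty, tz
    return m

def rotate_y(angle):
    """Rotation about the Y axis by `angle` radians."""
    c, s = math.cos(angle), math.sin(angle)
    m = identity()
    m[0][0], m[0][2], m[2][0], m[2][2] = c, s, -s, c
    return m

def scale(f):
    """Uniform scale, i.e. zoom."""
    m = identity()
    m[0][0] = m[1][1] = m[2][2] = f
    return m

def apply(m, p):
    """Apply a 4x4 transform to a 3D point (w = 1)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

p = (1.0, 0.0, 0.0)
p = apply(scale(2.0), p)              # zoom: (2, 0, 0)
p = apply(rotate_y(math.pi / 2), p)   # rotate 90 degrees about Y
p = apply(translate(0.0, 1.0, 0.0), p)
print([round(c, 6) for c in p])       # [0.0, 1.0, -2.0]
```

A scene-graph API composes such matrices along the path from the root to each object, which is why moving a parent node moves all of its children.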

Object Tracking And Elimination Using Lod Edge Maps Generated from Modified Canny Edge Maps (수정된 캐니 에지 맵으로부터 만들어진 LOD 에지 맵을 이용한 물체 추적 및 소거)

  • Park, Ji-Hun;Jang, Yung-Dae;Lee, Dong-Hun;Lee, Jong-Kwan;Ham, Mi-Ok
    • The KIPS Transactions:PartB / v.14B no.3 s.113 / pp.171-182 / 2007
  • We propose a simple method for tracking a nonparameterized subject contour in a single video stream with a moving camera and changing background. We then present a method to eliminate the tracked contour object by replacing it with the background scene obtained from other frames. First we track the object using LOD (level-of-detail) Canny edge maps; then we generate the background of each image frame and replace the tracked object in a scene with a background image from another frame that is not occluded by the tracked object. Our tracking method is based on LOD modified Canny edge maps and graph-based routing operations on the LOD maps. We obtain more edge pixels down the LOD hierarchy. Accurate tracking is based on reducing the effects of irrelevant edges by selecting the stronger edge pixels, thereby relying on the current frame's edge pixels as much as possible. The background scene for the first frame is determined by the camera motion, i.e., the camera movement between two image frames, and the other background scenes are computed from the previous ones. The computed background scenes are used to eliminate the tracked object from the scene: we generate an approximated background for the first frame, and background images for subsequent frames are based on the first-frame background or on previous frame images. This approach is based on computing camera motion. Our experimental results show that the method works well for moderate camera movement with small object shape changes.
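The "more edge pixels down the LOD hierarchy" property can be illustrated with a simplified sketch (not the authors' implementation): successive LOD levels are produced by lowering the gradient-magnitude threshold, so each level keeps every edge pixel of the level above and adds weaker ones. The gradient array below is invented example data.

```python
# Simplified sketch of an LOD edge-map hierarchy: thresholding a
# gradient-magnitude image at decreasing thresholds yields nested
# edge maps (each level a superset of the previous).

import numpy as np

def lod_edge_maps(gradient, thresholds):
    """Return one boolean edge map per LOD level, strongest edges first."""
    maps = []
    for t in sorted(thresholds, reverse=True):
        maps.append(gradient >= t)
    return maps

grad = np.array([[0.9, 0.1],
                 [0.5, 0.7]])
levels = lod_edge_maps(grad, [0.8, 0.6, 0.4])
for m in levels:
    print(m.astype(int))
```

The real method builds the hierarchy from modified Canny edge maps and then routes along the maps graph-wise; the nesting shown here is what lets the tracker prefer strong edges and fall back to weaker ones only when needed.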

Development of the video-based smart utterance deep analyser (SUDA) application (동영상 기반 자동 발화 심층 분석(SUDA) 어플리케이션 개발)

  • Lee, Soo-Bok;Kwak, Hyo-Jung;Yun, Jae-Min;Shin, Dong-Chun;Sim, Hyun-Sub
    • Phonetics and Speech Sciences / v.12 no.2 / pp.63-72 / 2020
  • This study aims to develop a video-based smart utterance deep analyser (SUDA) application that semi-automatically analyzes the utterances that a child and mother produce during interactions over time. SUDA runs on Android phones, iPhones, and tablet PCs, and allows video recording and uploading to a server. The device has three user modes: expert mode, general mode, and manager mode. In the expert mode, which is useful for speech and language evaluation, the subject's utterances are analyzed semi-automatically by measuring speech and language factors such as disfluency, morphemes, syllables, words, articulation rate, and response time. In the general mode, the outcome of the utterance analysis is provided in graph form, and the manager mode is accessible only to the administrator controlling the entire system, e.g., for utterance analysis and video deletion. SUDA helps reduce clinicians' and researchers' workload by saving time on utterance analysis. It also helps parents easily receive detailed information about their child's speech and language development. Further, this device will contribute to building a longitudinal dataset large enough to explore predictors of stuttering recovery and persistence.
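One of the expert-mode measures above, articulation rate, is conventionally computed as syllables per second of speaking time (total duration minus pauses). The sketch below shows that calculation; the record field names are invented, not SUDA's schema.

```python
# Hedged sketch of one expert-mode measure: articulation rate as
# syllables per second of speaking time (field names are hypothetical).

def articulation_rate(utterance):
    """Syllables divided by speaking time (duration minus pauses), per second."""
    speaking_time = utterance["duration_s"] - utterance["pause_s"]
    return utterance["syllables"] / speaking_time

u = {"syllables": 12, "duration_s": 5.0, "pause_s": 1.0}
print(articulation_rate(u))  # 3.0 syllables per second
```

Response time and disfluency counts would be derived similarly from the time-aligned transcript, which is why semi-automatic annotation saves so much analysis time.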