• Title/Summary/Keyword: interactive rendering

Interactive Virtual Arthroscopy Using Isosurface Raycasting Based on Min-Max Map (최대-최소맵 기반 등위면 광선투사법을 이용한 대화식 가상 관절경)

  • Lim, Suk-Hyun;Shin, Byeong-Seok
    • Journal of Biomedical Engineering Research
    • /
    • v.25 no.2
    • /
    • pp.103-109
    • /
    • 2004
  • Virtual arthroscopy is a simulation of optical arthroscopy that reconstructs anatomical structures from tomographic images of joint regions such as the knee, shoulder, and wrist. In this paper, we propose a virtual arthroscopy based on isosurface raycasting, a volume rendering method that generates 3D images in a short time. Our method exploits a spatial data structure called the min-max map to produce high-quality images in near real time. We also devise a physically based camera control model using a potential field, so that the virtual camera can fly through the articular cavity without restriction. Using this high-speed rendering method and realistic camera control model, we developed a virtual arthroscopy system.
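
The core idea of the min-max map, skipping blocks whose value range cannot contain the isosurface, can be sketched as follows. This is a minimal 1D illustration with hypothetical names and data, not the paper's implementation:

```python
# Min-max map sketch: the volume is partitioned into blocks, and a block
# is sampled only if the isovalue lies within its [min, max] range.

def build_min_max_map(volume, block):
    """Precompute (min, max) per block of a 1D volume (block = block size)."""
    return [(min(volume[i:i + block]), max(volume[i:i + block]))
            for i in range(0, len(volume), block)]

def first_hit(volume, block, mmap, isovalue):
    """March along the ray (here: the array axis), skipping empty blocks."""
    for b, (lo, hi) in enumerate(mmap):
        if not (lo <= isovalue <= hi):
            continue                      # empty space: skip the whole block
        for i in range(b * block, min((b + 1) * block, len(volume) - 1)):
            s0, s1 = volume[i], volume[i + 1]
            if min(s0, s1) <= isovalue <= max(s0, s1):
                return i                  # first interval crossing the isosurface
    return None

volume = [0, 0, 0, 0, 1, 3, 7, 9, 9, 9, 9, 9]
mmap = build_min_max_map(volume, block=4)
hit = first_hit(volume, 4, mmap, isovalue=5.0)   # block 0 is skipped entirely
```

Skipping whole blocks this way is what lets the raycaster stay interactive: the per-sample work is only spent near the isosurface.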

A 3D Audio-Visual Animated Agent for Expressive Conversational Question Answering

  • Martin, J.C.;Jacquemin, C.;Pointal, L.;Katz, B.
    • Proceedings of the Korea Information Convergence Society Conference
    • /
    • 2008.06a
    • /
    • pp.53-56
    • /
    • 2008
  • This paper reports on the ACQA (Animated agent for Conversational Question Answering) project conducted at LIMSI. The aim is to design an expressive animated conversational agent (ACA) for conducting research along two main lines: 1) perceptual experiments (e.g., perception of expressivity and 3D movements in both the audio and visual channels); and 2) design of human-computer interfaces requiring head models at different resolutions and the integration of the talking head in virtual scenes. The target application of this expressive ACA is RITEL, a real-time speech-based question-and-answer system developed at LIMSI. The architecture of the system is based on distributed modules exchanging messages through a network protocol. The main components of the system are: RITEL, a question-and-answer system searching raw text, which produces a text (the answer) and attitudinal information; the attitudinal information is then processed to deliver expressive tags, and the text is converted into phoneme, viseme, and prosodic descriptions. Audio speech is generated by the LIMSI selection-concatenation text-to-speech engine. Visual speech uses MPEG-4 keypoint-based animation and is rendered in real time by Virtual Choreographer (VirChor), a GPU-based 3D engine. Finally, visual and audio speech is played in a 3D audio-visual scene. The project also puts considerable effort into realistic visual and audio 3D rendering: a new model of phoneme-dependent human radiation patterns is included in the speech synthesis system, so that the ACA can move through the virtual scene with realistic 3D visual and audio rendering.

Interactive System for Efficient Video Cartooning (효율적인 비디오 카투닝을 위한 인터랙티브 시스템)

  • Hong, Sung-Soo;Yoon, Jong-Chul;Lee, In-Kwon
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2006.02a
    • /
    • pp.859-864
    • /
    • 2006
  • Mean shift is a non-parametric method that captures the characteristics of data well and has received much attention, especially in image processing. Despite the excellent performance that guarantees good results, however, its high memory consumption and long processing time make it impractical for applications such as video processing. To overcome these limitations, our system analyzes a video and separates it into foreground and background. For the regions classified as foreground, this paper presents a method that distinguishes each separated object and applies a coordinate shift to reduce the scale of the video computation; experiments show that this greatly shortens processing time. Next, 3D mean shift is applied to the separated foreground, a 3D cluster data structure is built from the result, and traversing this structure enables interactive editing. The background data are condensed into a single image, which is cartooned through a 2D-mean-shift-based interactive cartooning system. To express the simple tones characteristic of cartoons, this paper proposes a layered processing method that handles regions requiring fine segmentation separately from those that do not. Applying this process to various photographic images produced high-quality results that capture the subjects' characteristics well, in far less time than previous work. We expect these results to be useful and convenient in many fields, such as publishing and video editing.
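
The mean shift procedure underlying the system can be sketched in one dimension as follows. This is an illustrative minimal version with made-up names and data, not the paper's 3D implementation:

```python
# 1D mean shift with a flat kernel: each point is repeatedly shifted to
# the mean of its neighbours within `bandwidth` until convergence, so
# points belonging to the same mode collapse to one cluster value.

def mean_shift_point(x, data, bandwidth, iters=50):
    for _ in range(iters):
        neighbours = [d for d in data if abs(d - x) <= bandwidth]
        m = sum(neighbours) / len(neighbours)
        if abs(m - x) < 1e-9:
            break                         # converged to a mode
        x = m
    return x

data = [1.0, 1.2, 0.9, 8.0, 8.1, 7.9]
modes = sorted({round(mean_shift_point(p, data, bandwidth=2.0), 3) for p in data})
```

The cost of the neighbour search over every pixel of every frame is exactly what makes the naive method impractical for video, which motivates the paper's foreground/background split and coordinate shift.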

A Shadow Culling Algorithm for Interactive Ray Tracing (대화형 광선 추적법을 위한 그림자 컬링 알고리즘)

  • Nah, Jae-Ho;Park, Woo-Chan;Han, Tack-Don
    • Journal of Korea Game Society
    • /
    • v.9 no.6
    • /
    • pp.179-189
    • /
    • 2009
  • We present a novel shadow culling algorithm for interactive ray tracing. Our approach exploits frame-to-frame coherence instead of preprocessing shadow data, so the algorithm is suitable for dynamic ray tracing. In this algorithm, shadow processing results are stored with each primitive and reused in subsequent frames. We also present a novel occlusion testing method, which corrects potential shadow errors in our culling algorithm and requires low overhead. Experimental results show that our algorithm reduces the traversal cost by 7-19 percent and the intersection cost by 9-24 percent.
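
The frame-to-frame coherence idea, caching the occluder found for a primitive and testing it first in the next frame, might be sketched like this. All names are hypothetical; this is not the authors' code:

```python
# Shadow coherence sketch: a cache maps primitive id -> last known occluder.
# If the cached occluder still blocks the light, the full traversal is skipped.

def full_shadow_test(occluders, point, light):
    """Stand-in for a full acceleration-structure traversal."""
    for occ in occluders:
        if occ(point, light):
            return occ
    return None

def shadow_query(cache, prim_id, occluders, point, light):
    cached = cache.get(prim_id)
    if cached is not None and cached(point, light):
        return True                       # coherence hit: cached occluder still blocks
    hit = full_shadow_test(occluders, point, light)
    cache[prim_id] = hit                  # remember the result for the next frame
    return hit is not None

blocker = lambda p, l: p < l              # toy occluder predicate
occluders = [lambda p, l: False, blocker]
cache = {}
frame1 = shadow_query(cache, 42, occluders, point=1, light=5)
frame2 = shadow_query(cache, 42, occluders, point=2, light=5)   # reuses the cache
```

When the cached occluder no longer blocks, the query falls back to the full test, which is the role the paper's separate occlusion testing method plays in correcting potential shadow errors.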

A Design of A Dynamic Configurational Multimedia Spreadsheet for Effective HCI (효과적인 HCI를 위한 동적 재구성 멀티미디어 스프레드쉬트 설계)

  • Jee Sung-Hyun
    • The Journal of the Korea Contents Association
    • /
    • v.6 no.1
    • /
    • pp.14-22
    • /
    • 2006
  • The multimedia visualization spreadsheet environment has been shown to be extremely effective in supporting the organized visualization of multi-dimensional data sets. In this paper, we design a visualization model consisting of a 2D arrangement of spreadsheet elements that can be reconfigured at run time, where each spreadsheet element has a novel framestack. This feature gives each element of the proposed model a 3D data structure. It enables the visualization spreadsheet 1) to effectively manage, organize, and compactly encapsulate multi-dimensional data sets, 2) to reconfigure cell structures dynamically according to client requests, and 3) to rapidly process an interactive user interface. In several experiments with scientific users, the model has been demonstrated to be a highly interactive visual browsing tool for 2D and 3D graphics and for rendering in each frame.
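
One plausible reading of the framestack idea, a 2D grid of cells where each cell holds a stack of frames, can be sketched as follows. All class and method names are illustrative assumptions, not the paper's design:

```python
# Framestack sketch: a 2D spreadsheet whose cells each hold a stack of
# frames, yielding a 3D structure that can be reconfigured at run time.

class Cell:
    def __init__(self):
        self.frames = []                  # the per-cell framestack

    def push(self, frame):
        self.frames.append(frame)

    def top(self):
        return self.frames[-1] if self.frames else None

class Spreadsheet:
    def __init__(self, rows, cols):
        self.grid = [[Cell() for _ in range(cols)] for _ in range(rows)]

    def reconfigure(self, rows, cols):
        """Rebuild the 2D arrangement on request, keeping surviving cells."""
        old = self.grid
        self.grid = [[old[r][c] if r < len(old) and c < len(old[0]) else Cell()
                      for c in range(cols)] for r in range(rows)]

sheet = Spreadsheet(2, 2)
sheet.grid[0][0].push("slice-0")
sheet.grid[0][0].push("slice-1")
sheet.reconfigure(3, 3)                   # dynamic reconfiguration keeps old cells
```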

Effective Volume Rendering and Virtual Staining Framework for Visualizing 3D Cell Image Data (3차원 세포 영상 데이터의 효과적인 볼륨 렌더링 및 가상 염색 프레임워크)

  • Kim, Taeho;Park, Jinah
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.1
    • /
    • pp.9-16
    • /
    • 2018
  • In this paper, we introduce a visualization framework for cell image data obtained from optical diffraction tomography (ODT), including a method for representing cell morphology in a 3D virtual environment and a color mapping protocol. Unlike commonly known volume data sets, such as CT images of human organs or industrial machinery, which carry solid structural information, cell image data contain rather vague information with many morphological variations at the boundaries. Therefore, it is difficult to arrive at a consistent representation of cell structure for visualization. To obtain the desired visual representation of cellular structures, we propose an interactive visualization technique for ODT data. To visualize the 3D shape of the cell, we adopt a volume rendering technique generally applied to volume data visualization and improve the quality of the result by using an empty-space jittering method. Furthermore, we provide a layer-based independent rendering method with multiple transfer functions to represent two or more cellular structures in a unified render window. In our experiments, we examined the effectiveness of the proposed method by visualizing various types of cells obtained from a microscope that captures ODT and fluorescence images together.
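
The layer-based idea of giving each cellular structure its own transfer function can be sketched as a simple front-to-back compositing loop. The threshold-style transfer functions and all values below are illustrative assumptions, not the paper's implementation:

```python
# Layered volume rendering sketch: each structure has an independent
# transfer function (scalar range -> colour, opacity), and every sample
# along the ray is classified by each "layer" and alpha-composited.

def transfer(tf, sample):
    """Map a scalar sample to (colour, opacity) via a threshold-style TF."""
    lo, hi, colour, alpha = tf
    return (colour, alpha) if lo <= sample <= hi else (colour, 0.0)

def composite(ray_samples, tfs):
    colour, transparency = 0.0, 1.0
    for s in ray_samples:
        for tf in tfs:                    # one layer per transfer function
            c, a = transfer(tf, s)
            colour += transparency * a * c
            transparency *= (1.0 - a)
    return colour

membrane = (0.2, 0.4, 1.0, 0.5)           # (range lo, range hi, colour, opacity)
nucleus  = (0.7, 0.9, 0.3, 0.8)
pixel = composite([0.3, 0.8, 0.1], [membrane, nucleus])
```

Because the layers classify the same samples independently, two structures with overlapping scalar ranges can still be "stained" with different colours in the unified render window.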

An Optimization Technique of Scene Description for Effective Transmission of Interactive T-DMB Contents (대화형 T-DMB 컨텐츠의 효율적인 전송을 위한 장면기술정보 최적화 기법)

  • Li Song-Lu;Cheong Won-Sik;Jae Yoo-Young;Cha Kyung-Ae
    • Journal of Broadcast Engineering
    • /
    • v.11 no.3 s.32
    • /
    • pp.363-378
    • /
    • 2006
  • The Digital Multimedia Broadcasting (DMB) system was developed to offer high-quality audio-visual multimedia content in the mobile environment. The system adopts the MPEG-4 standard for the main video, audio, and other media formats, and the MPEG-4 scene description for interactive multimedia content. Animated and interactive content is realized through BIFS (Binary Format for Scenes), the binary scene description format that specifies the spatio-temporal layout and behaviors of individual objects. The more interactive the content, the higher the bitrate the scene description requires; however, the bandwidth that can be allocated to metadata such as the scene description is limited in the mobile environment. Meanwhile, the DMB terminal demultiplexes the content and decodes each medium with its own decoder; after decoding, the rendering module presents each media stream according to the scene description. Thus the BIFS stream carrying the scene description must be decoded and parsed before any media data can be presented. For this reason, a transmission delay in the BIFS stream delays the presentation of the whole audio-visual scene, even when the audio and video streams are encoded at very low bitrates. This paper presents an effective optimization technique for adapting the BIFS stream to the expected MPEG-2 TS bitrate, avoiding both bandwidth waste and transmission delay of the initial scene description for interactive DMB content.
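
The start-up cost described above is simple arithmetic, which a back-of-the-envelope sketch makes concrete. The scene sizes and bitrate below are hypothetical numbers chosen for illustration:

```python
# Why the initial BIFS scene must fit the allocated bitrate: its
# transmission time adds directly to the start-up delay of the whole
# audio-visual scene, regardless of how small the A/V bitrates are.

def startup_delay(scene_bytes, bifs_bitrate_bps):
    """Seconds needed to deliver the initial scene description."""
    return scene_bytes * 8 / bifs_bitrate_bps

def fits_budget(scene_bytes, bifs_bitrate_bps, budget_s):
    return startup_delay(scene_bytes, bifs_bitrate_bps) <= budget_s

# A 24 KB interactive scene over an 8 kbps BIFS allocation takes 24 s to
# arrive; optimizing the scene down to 4 KB brings that to 4 s.
before = startup_delay(24_000, 8_000)
after = startup_delay(4_000, 8_000)
```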

Development of an Interactive Virtual Reality Service based on 360 degree VR Image (360도 파노라마 영상 기반 대화형 가상현실 서비스 구축)

  • Kang, Byoung-Gil;Ryu, Seuc-Ho;Lee, Wan-Bok
    • Journal of Digital Convergence
    • /
    • v.15 no.11
    • /
    • pp.463-470
    • /
    • 2017
  • Currently, virtual reality content using VR images is in the spotlight, since it can be easily created and utilized. However, because VR images lack interaction, their applications and usability are limited. To overcome this problem, we propose a new method in which a 360-degree panorama image and a game engine are combined to build a high-resolution interactive VR service in real time. In particular, since the background, represented as a panorama image, is pre-generated through heavy rendering computation, it can be used to provide an immersive VR service with a relatively small amount of run-time computation on a low-performance device. To show the effectiveness of the proposed method, an interactive game set in a virtual zoo environment was implemented, illustrating that the approach can substantially improve user interaction and the immersive experience.
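
The cheap run-time step behind such a panorama-based viewer is mapping the camera's view direction into the pre-rendered image. A minimal sketch of the standard equirectangular lookup (illustrative, with a hypothetical axis convention of -z forward and +y up):

```python
import math

# Equirectangular lookup: a unit view direction is mapped to (u, v)
# texture coordinates in [0, 1] of the pre-rendered panorama image.
# The expensive rendering happened offline; this is all that runs per pixel.

def direction_to_uv(x, y, z):
    """Map a unit direction to equirectangular (u, v) in [0, 1]."""
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)   # longitude -> horizontal
    v = 0.5 - math.asin(y) / math.pi                # latitude  -> vertical
    return u, v

# Looking straight ahead (-z) lands in the centre of the panorama.
u, v = direction_to_uv(0.0, 0.0, -1.0)
```

In practice a game engine does this implicitly by texturing a skybox or inward-facing sphere with the panorama, while interactive objects are rendered on top in real time.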

A 3D Audio Broadcasting Terminal for Interactive Broadcasting Services (대화형 방송을 위한 3차원 오디오 방송단말)

  • Park Gi Yoon;Lee Taejin;Kang Kyeongok;Hong Jinwoo
    • Journal of Broadcast Engineering
    • /
    • v.10 no.1 s.26
    • /
    • pp.22-30
    • /
    • 2005
  • We implement an interactive 3D audio broadcasting terminal that synthesizes an audio scene according to the user's requests. The audio scene structure is described by the MPEG-4 AudioBIFS specification. The user updates scene attributes, and the terminal synthesizes the corresponding sound images in 3D space. The terminal supports the MPEG-4 Audio top nodes and some visual nodes. Instead of using sensor nodes and route elements, we predefine node-type-specific user interfaces to support BIFS commands for field replacement. We employ sound spatialization, directivity/shape modeling, and reverberation effects for 3D audio rendering and realistic feedback to user inputs. We also introduce a virtual concert program as an application scenario for the interactive broadcasting terminal.
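
Two of the basic operations behind sound spatialization, distance attenuation and panning from the source's azimuth, can be sketched as follows. This is a generic illustration with assumed conventions, not the MPEG-4 AudioBIFS rendering algorithm itself:

```python
import math

# Spatialization sketch: a mono sample is attenuated by an inverse-distance
# law and split into a stereo pair by constant-power panning, so moving a
# scene object updates the perceived direction and distance of its sound.

def spatialize(sample, azimuth_rad, distance):
    """Return (left, right) for a mono sample at the given position."""
    gain = 1.0 / max(distance, 1.0)                 # inverse-distance attenuation
    pan = (azimuth_rad / math.pi + 1.0) / 2.0       # map -pi..pi to 0..1
    left = sample * gain * math.cos(pan * math.pi / 2.0)
    right = sample * gain * math.sin(pan * math.pi / 2.0)
    return left, right

# A source directly in front (azimuth 0) at unit distance is centred.
left, right = spatialize(1.0, 0.0, 1.0)
```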

Design and Implementation of Interactive Multi-view Visual Contents Authoring System (대화형 복수시점 영상콘텐츠 저작시스템 설계 및 구현)

  • Lee, In-Jae;Choi, Jin-Soo;Ki, Myung-Seok;Jeong, Se-Yoon;Moon, Kyung-Ae;Hong, Jin-Woo
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.458-470
    • /
    • 2006
  • This paper describes issues and considerations in authoring interactive multi-view visual content based on MPEG-4. The issues include the types of multi-view visual content; scene composition for rendering; functionalities for user interaction; and the multi-view visual content file format. The MPEG-4 standard, which aims to provide object-based audiovisual coding tools, has been developed to address emerging needs from communications and interactive broadcasting, as well as from mixed service models resulting from technological convergence. Owing to its object-based coding, MPEG-4 can resolve the format diversity problem of multi-view visual content while providing high interactivity to users. Throughout this paper, we present which issues need to be addressed and how they can be realized by means of MPEG-4 Systems.