• Title/Summary/Keyword: Virtual Camera


Interactive Virtual Arthroscopy Using Isosurface Raycasting Based on Min-Max Map (최대-최소맵 기반 등위면 광선투사법을 이용한 대화식 가상 관절경)

  • 임석현;신병석
    • Journal of Biomedical Engineering Research / v.25 no.2 / pp.103-109 / 2004
  • A virtual arthroscopy is a simulation of optical arthroscopy that reconstructs anatomical structures from tomographic images of joint regions such as the knee, shoulder, and wrist. In this paper, we propose a virtual arthroscopy based on isosurface raycasting, a volume rendering method that generates 3D images in a short time. Our method exploits a spatial data structure called the min-max map to produce high-quality images in near real time. We also devise a physically based camera control model using a potential field, so that the virtual camera can fly through the articular cavity without restriction. Using this high-speed rendering method and realistic camera control model, we developed a virtual arthroscopy system.
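The min-max map mentioned above supports empty-space skipping during isosurface raycasting: each block of the volume stores its density range, and a ray only needs dense sampling in blocks whose range brackets the iso-value. A minimal sketch, with function names and block size chosen here for illustration:

```python
import numpy as np

def build_min_max_map(volume, block=4):
    # Partition the volume into block^3 cells and store each cell's
    # minimum and maximum density for fast intersection tests.
    z, y, x = volume.shape
    bz, by, bx = z // block, y // block, x // block
    v = volume[:bz * block, :by * block, :bx * block]
    v = v.reshape(bz, block, by, block, bx, block)
    return v.min(axis=(1, 3, 5)), v.max(axis=(1, 3, 5))

def block_may_contain_isosurface(mins, maxs, iso, idx):
    # A block can be skipped entirely when the iso-value lies
    # outside its [min, max] density range.
    return mins[idx] <= iso <= maxs[idx]
```

A raycaster would walk each ray block by block, calling `block_may_contain_isosurface` and sampling densely only where it returns True.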

Data-driven camera manipulation about vertical locomotion in a virtual environment (가상환경에서 수직 운동에 대한 데이터 기반 카메라 조작)

  • Seo, Seung-Won;Noh, Seong-Rae;Lee, Ro-Un;Park, Seung-Jun;Kang, Hyeong-Yeop
    • Journal of the Korea Computer Graphics Society / v.28 no.3 / pp.13-21 / 2022
  • In this paper, we investigate how camera manipulation can minimize motion sickness and maximize immersion when a user moves through a virtual environment that requires vertical movement. Because users typically experience virtual reality in a flat physical space, the user's actual movement and the virtual movement differ, producing a sensory conflict that can cause virtual reality motion sickness. We therefore propose and implement three camera manipulation techniques and determine through user experiments which model is most appropriate.

Virtual View-point Depth Image Synthesis System for CGH (CGH를 위한 가상시점 깊이영상 합성 시스템)

  • Kim, Taek-Beom;Ko, Min-Soo;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.7 / pp.1477-1486 / 2012
  • In this paper, we propose a multi-view CGH generation system based on virtual view-point depth image synthesis. A reliable depth image is acquired with a TOF depth camera, and the parameters of the reference-view cameras are extracted. Once the position of the virtual view-point camera is defined, the optimal reference-view cameras are selected by considering their positions and their distances to the virtual view-point. Setting the reference-view camera on the opposite side of the primary reference-view as a sub reference-view, we generate the virtual view-point depth image and compensate its occlusion boundaries using the sub reference-view depth image. Remaining hole boundaries are filled with the minimum values of their neighborhoods, yielding the final virtual view-point depth image, from which the CGH is generated. Experimental results show that the proposed algorithm performs much better than conventional algorithms.
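The abstract's final hole-filling step, replacing each remaining hole pixel with the minimum valid depth in its neighborhood (which biases the fill toward background depth at disocclusion borders), can be sketched as follows; the function name, hole marker, and radius are assumptions, not the paper's notation:

```python
import numpy as np

def fill_holes_with_neighborhood_min(depth, hole_value=0, radius=1):
    # Replace every hole pixel (marked hole_value) with the minimum
    # valid depth found in its (2*radius+1)^2 neighborhood.
    out = depth.astype(float).copy()
    h, w = depth.shape
    for y, x in np.argwhere(depth == hole_value):
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        patch = depth[y0:y1, x0:x1]
        valid = patch[patch != hole_value]
        if valid.size:                      # leave the pixel if no
            out[y, x] = valid.min()         # valid neighbor exists
    return out
```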

The 3D Geometric Information Acquisition Algorithm using Virtual Plane Method (가상 평면 기법을 이용한 3차원 기하 정보 획득 알고리즘)

  • Park, Sang-Bum;Lee, Chan-Ho;Oh, Jong-Kyu;Lee, Sang-Hun;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of Institute of Control, Robotics and Systems / v.15 no.11 / pp.1080-1087 / 2009
  • This paper presents an algorithm that acquires 3D geometric information using a virtual plane method. Measuring 3D information on a plane is easy because the z-axis value is not involved. Since a plane can be defined by any three points in 3D space, the algorithm can construct a number of virtual planes from feature points on the target object. The geometric relations between the origin of each virtual plane and the origin of the target object coordinates are expressed as known homogeneous matrices. With this idea, the algorithm derives a simple matrix formula involving only the unknown geometric relation between the origin of the target object and the origin of camera coordinates, and it is therefore faster and simpler than other methods. The proposed method uses a regular pin-hole camera model and a perspective projection matrix defined by the geometric relation between the coordinate systems. In the final part of this paper, we demonstrate the technique in a variety of applications, including measurements of industrial parts and known patch images.
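The building block of the approach above is the homogeneous matrix of a virtual plane's coordinate frame, constructed from three non-collinear feature points. A minimal sketch under the usual convention (x-axis along the first edge, z-axis along the plane normal); the function name is our own:

```python
import numpy as np

def virtual_plane_frame(p0, p1, p2):
    # Build the 4x4 homogeneous matrix of a virtual plane spanned by
    # three non-collinear 3D points: origin at p0, x-axis along
    # p0->p1, z-axis along the plane normal.
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    x = p1 - p0
    x /= np.linalg.norm(x)
    n = np.cross(p1 - p0, p2 - p0)          # plane normal
    z = n / np.linalg.norm(n)
    y = np.cross(z, x)                      # completes right-handed frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T
```

Chaining such matrices with the known plane-to-object transforms leaves only the object-to-camera relation unknown, which is what makes the paper's formulation compact.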

Development of Road Safety Estimation Method using Driving Simulator and Eye Camera (차량시뮬레이터 및 아이카메라를 이용한 도로안전성 평가기법 개발)

  • Doh, Tcheol-Woong;Kim, Won-Keun
    • International Journal of Highway Engineering / v.7 no.4 s.26 / pp.185-202 / 2005
  • In this research, to overcome the restrictions of field experiments, we modeled a planned road in 3D virtual reality and, with test subjects driving a driving simulator equipped with an eye camera, collected data on the dynamic responses related to sector fluctuation and on the drivers' visual behavior. We worked to reduce the unreality and side effects of the driving simulator by maximizing the agreement between motion reproduction and the virtual scene, based on data that the simulator's graphics module obtained from the dynamic analysis module. We also recorded drivers' natural visual behavior using an eye camera (FaceLAB) that requires no attached equipment such as a helmet or lenses. To evaluate road safety, we analyze how the safety that drivers perceive fluctuates with changes in road geometric structure, using the driving simulator and eye camera, and investigate the relationship between road geometry and safety level. Through this process, we suggest a method for evaluating, from the planning stage, whether a road makes drivers comfortable and at ease.


A Study on Implementation of Motion Graphics Virtual Camera with AR Core

  • Jung, Jin-Bum;Lee, Jae-Soo;Lee, Seung-Hyun
    • Journal of the Korea Society of Computer and Information / v.27 no.8 / pp.85-90 / 2022
  • In this study, to reduce the time and cost of the traditional motion graphics production pipeline while reproducing the movement of a real camera with a virtual one, we propose a method for creating a motion graphics virtual camera from the real-time tracking data of an AR Core-based mobile device. Instead of running a tracking pass on the video file after shooting, the proposed method performs tracking on the AR Core device during shooting itself, so that tracking success can be confirmed at the shooting stage. In our experiments, the resulting motion graphics images showed no difference from those of the conventional method, but the conventional tracking pass consumed 6 minutes and 10 seconds for a 300-frame clip, whereas the proposed method omits this step entirely and is therefore far more time-efficient. At a time of growing interest and active research in image production using virtual and augmented reality, this study can be applied to virtual camera creation and match moving.

Automatic Camera Control Based Avatar Behavior in Virtual Environment

  • Jung, Moon-Ryul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.55-62 / 1998
  • This paper presents a method of controlling the camera to present a virtual space meaningfully to participating users. The users interact with each other through dialogue and behavior, acting through their avatars, so the problem reduces to controlling the camera to capture the avatars effectively according to how they interact. The problem is solved by specifying camera control rules based on the cinematography conventions developed by film makers. A formal language is designed to encode cinematography rules for virtual environments in which people can participate in the story and influence its flow. The rules have been used in a 3D chatting system we have developed.
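The paper's formal rule language is not reproduced in the abstract, so the following is only an illustrative sketch of the general idea: mapping avatar interaction states to camera idioms drawn from cinematography. All rule names, keys, and shot labels here are hypothetical:

```python
# Hypothetical rule table: condition dict -> cinematographic shot.
CAMERA_RULES = [
    {"when": {"interaction": "dialogue", "participants": 2},
     "shot": "over-the-shoulder"},
    {"when": {"interaction": "dialogue", "participants": 3},
     "shot": "group shot"},
    {"when": {"interaction": "moving"},
     "shot": "tracking shot"},
]

def select_shot(state):
    # Return the first rule whose conditions are all satisfied by
    # the current interaction state; fall back to an establishing
    # shot when no rule matches.
    for rule in CAMERA_RULES:
        if all(state.get(k) == v for k, v in rule["when"].items()):
            return rule["shot"]
    return "establishing shot"
```

A real system would attach camera placement parameters (distance, angle, cut timing) to each shot label; the table form just shows how declarative rules decouple cinematography knowledge from the 3D engine.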


Video-Based Augmented Reality without Euclidean Camera Calibration (유클리드 카메라 보정을 하지 않는 비디오 기반 증강현실)

  • Seo, Yong-Deuk
    • Journal of the Korea Computer Graphics Society / v.9 no.3 / pp.15-21 / 2003
  • An algorithm is developed for augmenting real video with virtual graphics objects without computing Euclidean information. The real motion of the camera is recovered in affine space by a direct linear method using image matches. A virtual camera is then obtained by specifying, as an initialization step, the locations of four basis points in two input images. The four pairs of 2D locations and their 3D affine coordinates determine a Euclidean orthographic projection camera throughout the whole video sequence. Our method can generate views of objects shaded by virtual light sources, because all the functions of graphics libraries written on the basis of Euclidean geometry remain available. The novel formulation and experimental results on real video sequences are presented.
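The initialization described above fits a linear projection to four basis-point correspondences. As a simplified sketch (not the paper's full affine-to-Euclidean upgrade), an affine camera, a 2x4 matrix P with x = P [X, 1]^T, is exactly determined by four points in general position; function names are our own:

```python
import numpy as np

def affine_camera_from_basis(X_affine, x_image):
    # Recover a 2x4 affine projection matrix P from the 3D affine
    # coordinates of four basis points (4x3) and their 2D image
    # positions (4x2), assuming exact correspondences.
    A = np.hstack([np.asarray(X_affine, float), np.ones((4, 1))])  # 4x4
    b = np.asarray(x_image, float)                                 # 4x2
    return np.linalg.solve(A, b).T                                 # 2x4

def project(P, X):
    # Project a 3D affine point with the recovered camera.
    return P @ np.append(np.asarray(X, float), 1.0)
```

With the basis points tracked through the sequence, re-solving per frame yields a camera for every frame, which is what lets standard Euclidean graphics pipelines render the virtual objects.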


Application of Virtual Studio Technology and Digital Human Monocular Motion Capture Technology -Based on <Beast Town> as an Example-

  • YuanZi Sang;KiHong Kim;JuneSok Lee;JiChu Tang;GaoHe Zhang;ZhengRan Liu;QianRu Liu;ShiJie Sun;YuTing Wang;KaiXing Wang
    • International Journal of Internet, Broadcasting and Communication / v.16 no.1 / pp.106-123 / 2024
  • This article takes the talk show "Beast Town" as an example to introduce the overall technical solution, the technical difficulties, and the countermeasures for combining cartoon virtual characters with virtual studio technology, providing reference experience for multi-scenario applications of digital humans. Compared with earlier mixed-reality live broadcasts, we further upgraded our virtual production and digital-human driving technology, adopting industry-leading real-time virtual production and monocular camera driving techniques to launch a virtual cartoon character talk show, "Beast Town," that blends the real and the virtual, further enhances program immersion and the audio-visual experience, and expands the boundaries of virtual production. In the talk show, motion capture is used for final picture synthesis: the virtual scene must present dynamic effects while the digital human is driven and moves with the push, pull, and pan of the overall picture. This places very high demands on multi-party data synchronization, real-time digital-human driving, and synthetic picture rendering. We focus on issues such as virtual-real data docking and monocular camera motion capture, and combine outward camera tracking, multi-scene picture perspective, multi-machine rendering, and other solutions to resolve picture linkage and rendering quality problems in a deeply immersive space environment, presenting users with visual effects that link digital humans and live guests.