• Title/Summary/Keyword: 프레게

Search Results: 149

Real-Time Simulation of Single and Multiple Scattering of Light (빛의 단일 산란과 다중 산란의 실시간 시뮬레이션 기법)

  • Ki, Hyun-Woo;Lyu, Ji-Hye;Oh, Kyoung-Su
    • Journal of Korea Game Society / v.7 no.2 / pp.21-32 / 2007
  • It is significant to simulate scattering of light within media for realistic image synthesis; however, this requires costly computation. This paper introduces a practical image-space approximation technique for interactive subsurface scattering. We use a general two-pass approach, which creates transmitted irradiance samples in shadow maps and computes illumination using those shadow maps. We estimate single scattering efficiently using a method similar to common shadow mapping with adaptive deterministic sampling. A hierarchical technique based on diffusion theory is applied to evaluate multiple scattering. We further accelerate rendering by tabulating complex functions and utilizing level of detail. We demonstrate that our technique produces high-quality images of animated scenes with blurred shadows at hundreds of frames per second on graphics hardware, and that it can be integrated easily into existing interactive systems. (See the sketch after this entry.)

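The entry above evaluates multiple scattering with a hierarchical technique based on diffusion theory. As a rough illustration of the kind of diffusion-theory profile such approaches tabulate, the Python sketch below implements the classical dipole diffuse-reflectance approximation; the function, parameter names, and material coefficients are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Classical dipole diffuse-reflectance profile R_d(r), the kind of
# diffusion-theory function that multiple-scattering approximations tabulate.
# Parameter names and coefficients below are illustrative assumptions.
def dipole_Rd(r, sigma_a, sigma_s_prime, eta=1.3):
    """Diffuse reflectance at distance r from the point of illumination."""
    sigma_t_prime = sigma_a + sigma_s_prime            # reduced extinction
    alpha_prime = sigma_s_prime / sigma_t_prime        # reduced albedo
    sigma_tr = np.sqrt(3.0 * sigma_a * sigma_t_prime)  # effective transport coefficient
    # Diffuse Fresnel reflectance approximation and the boundary term A
    F_dr = -1.440 / eta**2 + 0.710 / eta + 0.668 + 0.0636 * eta
    A = (1.0 + F_dr) / (1.0 - F_dr)
    z_r = 1.0 / sigma_t_prime              # depth of the real dipole source
    z_v = z_r * (1.0 + 4.0 / 3.0 * A)      # depth of the virtual (mirrored) source
    d_r = np.sqrt(r**2 + z_r**2)
    d_v = np.sqrt(r**2 + z_v**2)
    return (alpha_prime / (4.0 * np.pi)) * (
        z_r * (sigma_tr * d_r + 1.0) * np.exp(-sigma_tr * d_r) / d_r**3
        + z_v * (sigma_tr * d_v + 1.0) * np.exp(-sigma_tr * d_v) / d_v**3
    )

# Example: reflectance falloff with distance for made-up material coefficients
r = np.linspace(0.01, 2.0, 5)
print(dipole_Rd(r, sigma_a=0.0021, sigma_s_prime=2.19))
```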

A Study on User Evaluation of VR Games on Improving Visual Immersion (시각적 몰입감 향상에 관한 VR 게임의 사용자 평가 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence / v.20 no.2 / pp.407-413 / 2022
  • This study conducted an empirical analysis through user experience and a questionnaire to find out whether the technical and contextual elements of the 'COVID-19 SABER' VR game, produced and developed through earlier research, affect the improvement of the user's visual immersion. As a result, among the technical elements, the hypotheses regarding resolution, viewing angle, effects, and design quality were accepted, while the hypotheses regarding frame rate and lighting brightness were rejected. Among the contextual elements, the hypotheses about background, directing, color and texture, and interest and fun were accepted, and the hypothesis about storytelling was rejected. In summary, to increase the visual immersion of VR games, the technical elements of resolution, viewing angle, effects, and design quality and the contextual elements of background, directing, color and texture, and interest and fun must be considered in design and production. The results of this study are expected to serve as basic data for the production and development of VR games that can induce and improve users' visual immersion in the future.

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.123-131 / 2020
  • This study proposes a real-time lighting algorithm for scenes in which each of more than 100,000 moving particles acts as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first 3D texture stores light color and the second stores light direction information. Each frame goes through two steps. The first step updates, in a compute shader, the particle information required for 3D texture initialization and rendering: each particle position is converted to 3D texture sampling coordinates, and based on these coordinates the first 3D texture accumulates the color sum of the particle lights affecting the corresponding voxel, while the second 3D texture accumulates the sum of the direction vectors from that voxel to the particle lights. The second step runs in the general rendering pipeline: the world position of the polygon being rendered is used to compute the exact sampling coordinates of the 3D textures updated in the first step. Since the sampling coordinates correspond 1:1 to the 3D texture and the game world, the world coordinates of the pixel are used directly as the sampling coordinates, and lighting is computed from the sampled color and light direction vector. The 3D texture corresponds 1:1 to the actual game world with a minimum unit of 1 m, so in regions smaller than 1 m staircase artifacts caused by the limited resolution appear; interpolation and supersampling are performed during texture sampling to mitigate them. Measurements of per-frame rendering time showed 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particles, and 214 ms for the forward pipeline and 104 ms for the deferred pipeline with 1,024766 particle lights.
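
As a way to make the two-step structure described above concrete, the following Python/NumPy sketch mimics it on the CPU: particle lights are splatted into a color-sum grid and a direction-sum grid (standing in for the two 3D textures), and shading samples both grids at a surface point's world position. The grid size, world extent, and the simple Lambert term are assumptions for illustration, not details from the paper, and the paper's compute-shader and rendering-pipeline stages are replaced by plain functions.

```python
import numpy as np

# Minimal CPU stand-in for the two-step algorithm described above.
GRID = 64                      # voxels per axis (1 voxel ~ 1 m in the paper)
color_grid = np.zeros((GRID, GRID, GRID, 3), dtype=np.float32)
dir_grid   = np.zeros((GRID, GRID, GRID, 3), dtype=np.float32)

def world_to_voxel(p):
    """Map a world-space position in [0, GRID) metres to integer voxel coords."""
    return tuple(np.clip(p.astype(int), 0, GRID - 1))

def splat_particles(positions, colors):
    """Step 1 (compute-shader stage in the paper): accumulate per-voxel sums."""
    for p, c in zip(positions, colors):
        v = world_to_voxel(p)
        voxel_center = np.array(v) + 0.5
        color_grid[v] += c                      # sum of particle light colors
        d = p - voxel_center
        n = np.linalg.norm(d)
        if n > 1e-6:
            dir_grid[v] += d / n                # sum of voxel-to-light directions

def shade(world_pos, normal):
    """Step 2 (rendering stage): sample both grids at the pixel's world position."""
    v = world_to_voxel(world_pos)
    light_dir = dir_grid[v].copy()
    n = np.linalg.norm(light_dir)
    if n < 1e-6:
        return np.zeros(3)
    light_dir /= n
    ndotl = max(float(np.dot(normal, light_dir)), 0.0)   # simple Lambert term
    return color_grid[v] * ndotl

# Usage with a handful of random particle lights
rng = np.random.default_rng(0)
splat_particles(rng.uniform(0, GRID, (1000, 3)), rng.uniform(0, 1, (1000, 3)))
print(shade(np.array([32.2, 10.5, 40.1]), np.array([0.0, 1.0, 0.0])))
```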

Effects of Motion Correction for Dynamic $[^{11}C]Raclopride$ Brain PET Data on the Evaluation of Endogenous Dopamine Release in Striatum (동적 $[^{11}C]Raclopride$ 뇌 PET의 움직임 보정이 선조체 내인성 도파민 유리 정량화에 미치는 영향)

  • Lee, Jae-Sung;Kim, Yu-Kyeong;Cho, Sang-Soo;Choe, Yearn-Seong;Kang, Eun-Joo;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Kim, Sang-Eun
    • The Korean Journal of Nuclear Medicine / v.39 no.6 / pp.413-420 / 2005
  • Purpose: Neuroreceptor PET studies require 60-120 minutes to complete, and head motion of the subject during the PET scan increases the uncertainty in measured activity. In this study, we investigated the effects of data-driven head motion correction on the evaluation of endogenous dopamine release (DAR) in the striatum during a motor task which might have caused significant head motion artifacts. Materials and Methods: $[^{11}C]raclopride$ PET scans on 4 normal volunteers acquired with a bolus plus constant infusion protocol were retrospectively analyzed. Following a 50 min resting period, the participants played a video game with a monetary reward for 40 min. Dynamic frames acquired during the equilibrium condition (pre-task: 30-50 min, task: 70-90 min, post-task: 110-120 min) were realigned to the first frame of the pre-task condition. Intra-condition registrations between the frames were performed, and the average image for each condition was created and registered to the pre-task image (inter-condition registration). The pre-task PET image was then co-registered to each participant's own MRI, and the transformation parameters were reapplied to the other images. Volumes of interest (VOI) for dorsal putamen (PU) and caudate (CA), ventral striatum (VS), and cerebellum were defined on the MRI. Binding potential (BP) was measured, and DAR was calculated as the percent change of BP during and after the task. SPM analyses on the BP parametric images were also performed to explore regional differences in the effects of head motion on BP and DAR estimation. Results: Changes in position and orientation of the striatum during the PET scans were observed before head motion correction. BP values in the pre-task condition were not changed significantly after the intra-condition registration. However, the BP values during and after the task and the DAR were significantly changed after the correction. SPM analysis also showed that the extent and significance of the BP differences were significantly changed by the head motion correction, and such changes were prominent in the periphery of the striatum. Conclusion: The results suggest that misalignment of the MRI-based VOI and the striatum in the PET images and incorrect DAR estimation due to head motion during the PET activation study were significant, but could be remedied by data-driven head motion correction.
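
The abstract defines DAR as the percent change of binding potential between conditions. The tiny sketch below only makes that relation concrete; the sign convention (DAR expressed as the percent reduction of BP relative to the pre-task baseline) and the numbers are assumptions for illustration, not values from the study.

```python
# Hypothetical numbers, only to make the BP/DAR relation in the abstract concrete.
def dopamine_release(bp_pretask, bp_condition):
    """Percent reduction of binding potential relative to the pre-task baseline."""
    return 100.0 * (bp_pretask - bp_condition) / bp_pretask

bp_pre, bp_task = 2.8, 2.5          # illustrative [11C]raclopride BP values
print(f"DAR during task: {dopamine_release(bp_pre, bp_task):.1f}%")  # ~10.7%
```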

Comparative Analysis of Markerless Facial Recognition Technology for 3D Character's Facial Expression Animation -Focusing on the method of Faceware and Faceshift- (3D 캐릭터의 얼굴 표정 애니메이션 마커리스 표정 인식 기술 비교 분석 -페이스웨어와 페이스쉬프트 방식 중심으로-)

  • Kim, Hae-Yoon;Park, Dong-Joo;Lee, Tae-Gu
    • Cartoon and Animation Studies / s.37 / pp.221-245 / 2014
  • With the success of the world's first 3D computer animated film, "Toy Story," in 1995, industrial development of 3D computer animation gained considerable momentum. Consequently, various 3D animations for TV were produced, and high-quality 3D computer animation games became common. To save a large amount of 3D animation production time and cost, technological development has been conducted actively in accordance with the expansion of industrial demand in this field; compared with the traditional approach of producing animation through hand drawings, the efficiency of producing 3D computer animation is far greater. In this study, an experiment and a comparative analysis of markerless motion capture systems for facial expression animation have been conducted, aiming to improve the efficiency of 3D computer animation production. The Faceware system, a product of Image Metrics, provides sophisticated production tools despite the complexity of its motion capture recognition and application process. The Faceshift system, a product of the company of the same name, is relatively less sophisticated but provides applications for rapid real-time motion recognition. It is hoped that the results of the comparative analysis presented in this paper become baseline data for selecting the appropriate motion capture or keyframe animation method for the most efficient production of facial expression animation, in accordance with production time and cost, the degree of sophistication, and the media in use.

Spray Modeling: An Augmented Reality Based Tangible 3D Modeling Interface (스프레이 모델링: 증강현실 기반의 실체적인 3차원 모델링 인터페이스 제안)

  • Jung, Hee-Kyoung;Nam, Tek-Jin
    • Archives of design research / v.18 no.4 s.62 / pp.119-128 / 2005
  • This paper presents an intuitive 3D modeling interlace based on a field study and prototype development. The process and tools of modeling were observed in workshops of professional design model making, day modeling, wood caning and glass crafting. The Spray Modeling interlace was developed from the observational analysis of the field study. It is a 3D modeling interface which combines particle spraying and day modeling in Virtual or Augmented Reality space. Virtual volume particles are sprayed on frames in Augmented Reality space as day modeling. It adopts a real air spay gun as a tangible interface device which provides coherent sound and air-force feedback. The prototype development and a user study showed that the interface supports new patterns of form development and expression. Control interfaces and requirements of auxiliary devices were found to be improved. This study examines the potential of the new interlace for designers working in 3D virtual and augmented reality. The new spraying interface is also expected to be used as an alternative interface in 3D computer workspace, games, education software and media art.


The Performance Analysis of GPU-based Cloth simulation according to the Change of Work Group Configuration (워크 그룹 구성 변화에 따른 GPU 기반 천 시뮬레이션의 성능 분석)

  • Choi, Young-Hwan;Hong, Min;Lee, Seung-Hyun;Choi, Yoo-Joo
    • Journal of Internet Computing and Services / v.18 no.3 / pp.29-36 / 2017
  • These days, 3D dynamic simulation is closely related to many industries. In the past, physically-based 3D simulation was used mainly in car crash or construction related fields, but it also plays an important role in movies and games today. Many mathematical computations are needed to represent a 3D object realistically, but it is difficult to process such a large amount of calculation in real time in a CPU-based application. Recently, with advanced graphics hardware and improved architectures, the GPU can be utilized for general-purpose computation as well as graphics computation, and many approaches using the GPU have been applied in various research fields. In this paper, we analyze the performance variation of two GPU-based cloth simulation algorithms according to the execution properties of GPU shaders, in order to optimize the performance of GPU-based cloth simulation. Cloth simulation is implemented with a spring-centric algorithm and a node-centric algorithm using GPU parallel computing with the compute shader of GLSL 4.3. We compare the performance of these algorithms according to changes in the size and dimension of the work group. Each test is repeated 10 times over 5,000 frames, and experimental results are reported as averaged FPS. The experimental results show that the node-centric algorithm executes faster than the spring-centric algorithm.
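
For readers unfamiliar with the two formulations compared above, the following Python sketch shows a node-centric mass-spring update on the CPU: each node gathers forces from its structural neighbours and is integrated with Verlet, which is roughly the per-node work a GLSL 4.3 compute shader invocation would do (with the work-group size as the tuning knob the paper studies). Grid size, stiffness, and time step are illustrative assumptions, and only structural springs are modelled.

```python
import numpy as np

# Node-centric mass-spring cloth step (CPU stand-in for a compute shader
# that maps one invocation to one node). Parameters are assumptions.
N, k, dt = 16, 60.0, 0.016
rest = 1.0 / (N - 1)
gravity = np.array([0.0, -9.8, 0.0])

pos = np.zeros((N, N, 3))
for i in range(N):
    for j in range(N):
        pos[i, j] = (i * rest, 0.0, j * rest)   # cloth initially flat in the xz plane
prev = pos.copy()

def step(pos, prev):
    new = np.empty_like(pos)
    for i in range(N):
        for j in range(N):
            force = gravity.copy()
            # Gather spring forces from the 4 structural neighbours (node-centric).
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N:
                    d = pos[ni, nj] - pos[i, j]
                    length = np.linalg.norm(d)
                    force += k * (length - rest) * d / max(length, 1e-9)
            # Verlet integration with unit mass; pin the first row as a boundary.
            if i == 0:
                new[i, j] = pos[i, j]
            else:
                new[i, j] = 2 * pos[i, j] - prev[i, j] + force * dt * dt
    return new, pos

for _ in range(10):
    pos, prev = step(pos, prev)
print(pos[N // 2, N // 2])   # position of the centre node after 10 steps
```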

A Study of design component of character that appear Mobile game (모바일게임에 나타난 캐릭터의 디자인요소에 관한 연구)

  • Choi, Tae-Jun;Ryu, Seuc-Ho;Lee, Heung-Woo
    • Proceedings of the Korea Contents Association Conference / 2006.05a / pp.204-210 / 2006
  • The mobile phone has become one of the most essential things in modern life. In the past, it was only a means of communication among people, but now the functions of the mobile phone are becoming diversified: people listen to music, take pictures, and play games with their mobile phones. According to research in January 2005, the members of the three Korean mobile communication companies numbered 36 million, and research in January 2006 shows that 2,016,000 members were added in a year (from 36 million to 38,016,000). Mobile games are the most popular function among teenagers; people like mobile games because they can enjoy them anywhere in an easy and simple way, so some companies are even launching phones dedicated to mobile games, and with the unification of platforms, people can now enjoy various kinds of mobile games. Most mobile character bodies are divided into two parts, a head part and a body part; if the body is divided into more than four parts, it is difficult to express the emotions and actions of the character because of the small screen of the mobile phone, which is why most characters have two-part bodies. This paper studies how the design factors of mobile characters, such as materials, shapes, and frames of accessory movement, are used in mobile games.


A Real-time Motion Object Detection based on Neighbor Foreground Pixel Propagation Algorithm (주변 전경 픽셀 전파 알고리즘 기반 실시간 이동 객체 검출)

  • Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.9-16 / 2010
  • Moving object detection detects foreground objects distinct from the background scene in each new incoming image frame and is an essential ingredient in image processing applications such as intelligent visual surveillance, HCI, and object-based video compression. Most previous object detection algorithms are still computationally heavy, so it is difficult to use them for real-time multi-channel moving object detection on a workstation, or even for one-channel real-time detection on an embedded system. The foreground mask correction necessary for more precise object detection is usually accomplished with morphological operations such as opening and closing. Morphological operations are not computationally cheap and, moreover, are difficult to run simultaneously with the subsequent connected component labeling routine, since they need a quite different type of processing from what connected component labeling does. In this paper, we first devise a fast and precise foreground mask correction algorithm, "Neighbor Foreground Pixel Propagation (NFPP)," which utilizes the neighbor pixel checking employed in connected component labeling. Next, we propose a novel moving object detection method based on NFPP in which the connected component labeling routine can be executed simultaneously with the foreground mask correction. Through experiments, it is verified that the proposed method yields more precise object detection and more than 4 times faster processing per image frame, for the videos in the given experiments, than the previous moving object detection method using morphological operations.
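
The abstract does not spell out the exact NFPP propagation rule, so the Python sketch below is only an assumption-laden illustration of the general idea: a BFS connected-component labelling whose neighbour checks are reused to promote background pixels that have enough foreground neighbours, so that mask correction and labelling run in a single pass instead of separate morphological operations.

```python
import numpy as np
from collections import deque

# Illustrative only: the promotion rule and threshold are assumptions, not the
# authors' exact NFPP algorithm.
def label_with_propagation(mask, promote_thresh=5):
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=np.int32)
    corrected = mask.copy()
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if corrected[sy, sx] and labels[sy, sx] == 0:
                next_label += 1
                labels[sy, sx] = next_label
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if not (0 <= ny < h and 0 <= nx < w) or labels[ny, nx]:
                                continue
                            if corrected[ny, nx]:
                                labels[ny, nx] = next_label
                                queue.append((ny, nx))
                            else:
                                # Mask correction fused with labelling: a background
                                # pixel surrounded by enough foreground neighbours
                                # is promoted to foreground and labelled.
                                nbh = corrected[max(ny - 1, 0):ny + 2, max(nx - 1, 0):nx + 2]
                                if nbh.sum() >= promote_thresh:
                                    corrected[ny, nx] = True
                                    labels[ny, nx] = next_label
                                    queue.append((ny, nx))
    return labels, corrected

noisy = np.zeros((8, 8), dtype=bool)
noisy[2:6, 2:6] = True
noisy[3, 4] = False                      # a small hole inside the object
labels, fixed = label_with_propagation(noisy)
print(int(fixed[3, 4]), labels.max())    # hole filled, one connected component
```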

Stylized Specular Reflections Using Projective Textures based on Principal Curvature Analysis (주곡률 해석 기반의 투영 텍스처를 이용한 스타일 반사 효과)

  • Lee, Hwan-Jik;Choi, Jung-Ju
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.37-44 / 2006
  • Specular reflections provide visual feedback that describes the material type of an object, its local shape, and the lighting environment. In photorealistic rendering, there has been a considerable amount of research on rendering specular reflections effectively based on a local reflection model. In traditional cel animation and cartoons, specular reflections play an important role in representing artistic intentions for an object and its environment, so the shapes of highlights are quite stylistic. In this paper, we present a method to render and control stylized specular reflections using projective textures based on principal curvature analysis. By specifying a texture as a highlight pattern and projecting the texture onto the specular region of a given 3D model, we can obtain a stylized representation of specular reflections. For a given polygonal model, view point, and light source, we first find the point of maximum specular intensity and then locate the texture projector along the line parallel to the normal vector and passing through that point. The orientation of the projector is determined by the principal directions at the point, and the size of the projection frustum is determined by the corresponding principal curvatures. The proposed method can control the position, orientation, and size of the specular reflection efficiently by translating the projector along the principal directions, rotating the projector about the normal vector, and scaling the principal curvatures, respectively. The method is applicable to real-time applications such as cartoon-style 3D games. We implemented the method with the Microsoft DirectX 9.0c SDK and programmable vertex/pixel shaders on an Nvidia GeForce 7800 graphics subsystem. According to our experimental results, we can render and control stylized specular reflections for a 3D model of several tens of thousands of triangles in real time. (See the sketch after this entry.)

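The projector placement described above can be summarised in a few lines. The Python sketch below assumes per-vertex normals, principal directions, and principal curvatures are already available from a curvature-analysis pass; the Blinn-Phong exponent, the offset distance along the normal, and the curvature-to-frustum-size mapping are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

# Sketch of highlight-projector placement: anchor at the vertex of maximum
# specular intensity, look down the normal, orient the image axes along the
# principal directions, and size the frustum from the principal curvatures.
def place_highlight_projector(verts, normals, e1, e2, k1, k2, eye, light,
                              shininess=64.0, offset=2.0, c=0.5):
    view = eye - verts
    view /= np.linalg.norm(view, axis=1, keepdims=True)
    half = view + light / np.linalg.norm(light)
    half /= np.linalg.norm(half, axis=1, keepdims=True)
    # Blinn-Phong specular intensity per vertex; pick the peak vertex.
    spec = np.clip((normals * half).sum(axis=1), 0.0, 1.0) ** shininess
    i = int(np.argmax(spec))
    projector_pos = verts[i] + offset * normals[i]     # along the normal line
    # Projector axes: image axes follow the principal directions, view axis is -normal.
    axes = np.stack([e1[i], e2[i], -normals[i]])
    # Frustum half-sizes shrink where curvature is high (flatter regions get a
    # wider highlight); this mapping is an assumption for illustration.
    half_size = c / np.maximum(np.abs([k1[i], k2[i]]), 1e-3)
    return projector_pos, axes, half_size

# Tiny usage example with a single vertex
v = np.array([[0.0, 0.0, 0.0]]); n = np.array([[0.0, 0.0, 1.0]])
e1 = np.array([[1.0, 0.0, 0.0]]); e2 = np.array([[0.0, 1.0, 0.0]])
print(place_highlight_projector(v, n, e1, e2, np.array([0.5]), np.array([0.1]),
                                eye=np.array([0.0, 0.0, 3.0]),
                                light=np.array([0.0, 0.0, 1.0])))
```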