• Title/Summary/Keyword: 그래픽스 (Graphics)


3D Volumetric Capture-based Dynamic Face Production for Hyper-Realistic Metahuman (극사실적 메타휴먼을 위한 3D 볼류메트릭 캡쳐 기반의 동적 페이스 제작)

  • Oh, Moon-Seok;Han, Gyu-Hoon;Seo, Young-Ho
    • Journal of Broadcast Engineering / v.27 no.5 / pp.751-761 / 2022
  • With the development of digital graphics technology, the metaverse has become a significant trend in the content market, and demand for technology that generates high-quality three-dimensional (3D) models is rapidly increasing. Accordingly, various technical attempts are being made to create high-quality 3D virtual humans, represented by digital humans. 3D volumetric capture is in the spotlight as a technology that can create a 3D human model faster and more precisely than existing 3D modeling methods. In this study, we analyze high-precision 3D facial production technology based on practical cases, examining the difficulties in content production and the technologies applied in volumetric 3D and 4D model creation. Based on an actual model implementation case using 3D volumetric capture, we examine techniques for 3D virtual human face production and produce a new metahuman using a graphics pipeline for efficient human facial generation.

Isosurface Component Tracking and Visualization in Time-Varying Volumetric Data (시변 볼륨 데이터에서의 등위면 콤포넌트 추적 및 시각화)

  • Sohn, Bong-Soo
    • Journal of the Korea Society of Computer and Information / v.14 no.10 / pp.225-231 / 2009
  • This paper describes a new algorithm to compute and track the deformation of an isosurface component defined in time-varying volumetric data. Isosurface visualization is one of the most common methods for effective visualization of volumetric data. However, most isosurface visualization algorithms have been developed for static volumetric data. As imaging and simulation techniques advance, large time-varying volumetric datasets are increasingly generated, so isosurface visualization techniques that exploit the dynamic properties of time-varying data become necessary. First, we define temporal correspondence between isosurface components of two consecutive timesteps. Based on this definition, we present an algorithm that tracks the deformation of an isosurface component selected using the Contour Tree. By repeating this process over all timesteps, we can effectively visualize the time-varying data by displaying the dynamic deformation of the selected isosurface component.
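
A minimal sketch of the temporal-correspondence step, assuming each timestep is a regular NumPy volume: connected components of the thresholded volume stand in for isosurface components (the paper selects them via the Contour Tree, which is not reproduced here), and correspondence is taken as the component at t+1 with the largest voxel overlap.

```python
import numpy as np
from scipy import ndimage  # connected-component labeling

def component_correspondence(vol_t, vol_t1, isovalue):
    """Match isosurface components of two consecutive timesteps by voxel overlap."""
    labels_t, n_t = ndimage.label(vol_t >= isovalue)   # components at timestep t
    labels_t1, _ = ndimage.label(vol_t1 >= isovalue)   # components at timestep t+1

    matches = {}
    for c in range(1, n_t + 1):
        overlap = labels_t1[labels_t == c]              # labels those voxels carry at t+1
        overlap = overlap[overlap > 0]
        matches[c] = int(np.bincount(overlap).argmax()) if overlap.size else None
    return matches

# Hypothetical usage: follow the component selected at the first timestep.
# volumes = [np.load(f"vol_{t}.npy") for t in range(10)]
# selected = 1
# for t in range(len(volumes) - 1):
#     selected = component_correspondence(volumes[t], volumes[t + 1], isovalue=0.5)[selected]
```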

Analysis of muddy water generation status using R (R을 이용한 흙탕물 발생현황 분석)

  • Park, Woon Ji;Oh, Seung Min;Lim, Kyoung Jae
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.350-350 / 2022
  • R is an open-source programming language widely used for statistical and big-data analysis; because its statistical and graphics capabilities can be extended, it is applied in many fields. Its use in water-resources research in particular is growing, and various water-resources-related R packages have recently been released. Among them, EGRET, developed by the U.S. Geological Survey (USGS), is an R-based package for analyzing long-term trends in water-quality and discharge data; it provides extensive graphical presentation of the analyzed and processed data, making it a highly effective tool for exploratory data analysis. In particular, the EGRET package produces graphical output for examining the relationship between concentration and discharge, the presence and character of seasonality in the collected data, and the existence of gradual or abrupt trends, and it applies the Weighted Regressions on Time, Discharge, and Season (WRTDS) model to characterize the status and trends of concentration and load. WRTDS is an analysis method for water-quality datasets that can be used to characterize the status and trends of concentration and load; as a fundamentally exploratory method, it is designed to be sensitive to a wide range of trend scenarios, can detect and describe temporal trends that may not fit linear or quadratic forms, and is well suited to irregularly spaced data. In this study, exploratory data analysis with R was carried out on the Jaun Stream in the Jaun district, where persistent muddy-water generation in the upper Bukhan River has become a problem, to analyze the current status of muddy-water generation. Using the EGRET package, the collected data (191 suspended solids (SS) samples collected from April 2016 to July 2021 and discharge records from a nearby flow-gauging network) were examined for the relationship between discharge and SS concentration, the distribution of SS concentration over time, monthly characteristics of SS concentration, and changes in SS concentration by flow regime; SS concentrations and loads were then estimated with the WRTDS model and reviewed to characterize the muddy-water load of the Jaun Stream watershed.

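The EGRET workflow itself runs in R, but the core of WRTDS is a locally weighted regression of log concentration on time, log discharge, and seasonal terms. Below is a minimal NumPy sketch of that weighting idea, with illustrative tri-cube half-window widths; it is not the USGS implementation, which adds survival regression and more elaborate windowing.

```python
import numpy as np

def tricube(d, h):
    """Tri-cube weight for distance d and half-window width h (zero outside the window)."""
    d = np.abs(d) / h
    return np.where(d < 1.0, (1.0 - d**3) ** 3, 0.0)

def wrtds_estimate(t_obs, q_obs, c_obs, t0, q0, h_time=7.0, h_lnq=2.0, h_season=0.5):
    """Estimate concentration at decimal time t0 and discharge q0 with a WRTDS-style
    locally weighted regression:
        ln(c) = b0 + b1*t + b2*ln(q) + b3*sin(2*pi*t) + b4*cos(2*pi*t)
    Half-window widths are illustrative defaults, not EGRET's.
    """
    # Seasonal distance: fraction of a year between observations and t0, wrapped to [0, 0.5].
    season = np.abs(t_obs - t0) % 1.0
    season = np.minimum(season, 1.0 - season)

    # Combined tri-cube weights in time, ln(discharge), and season.
    w = (tricube(t_obs - t0, h_time)
         * tricube(np.log(q_obs / q0), h_lnq)
         * tricube(season, h_season))

    # Weighted least squares via row scaling by sqrt(w).
    X = np.column_stack([np.ones_like(t_obs), t_obs, np.log(q_obs),
                         np.sin(2 * np.pi * t_obs), np.cos(2 * np.pi * t_obs)])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], np.log(c_obs) * sw, rcond=None)

    x0 = np.array([1.0, t0, np.log(q0),
                   np.sin(2 * np.pi * t0), np.cos(2 * np.pi * t0)])
    return float(np.exp(x0 @ beta))

# Hypothetical usage with the 191 SS samples and matching discharge records:
# c_hat = wrtds_estimate(decimal_year, discharge, ss_conc, t0=2020.5, q0=3.0)
```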

Consumer Evaluations of Convergence Products (컨버전스 제품에 대한 소비자 평가)

  • Kim, Hae-Ryong;Hong, Shin-Myung;Lee, Moonkyu
    • Asia Marketing Journal / v.7 no.1 / pp.1-20 / 2005
  • Convergence products, which perform multiple functions, are gaining popularity in the marketplace these days. However, little research to date has examined how consumers evaluate such products. The present research, drawing on the existing literature on innovation resistance and product bundling, examines the factors influencing consumer responses to convergence products. The results reveal that consumer evaluations and purchase intentions are affected by perceptions of risk (emotional and functional), convenience, product complementarity, and relative advantage, as well as by individual differences measured by technographics. Implications of the results for marketers and for future research are discussed.


The Study on the Role of 3D Animated Pre-visualization in VFX Film Production (VFX 영화 제작을 위한 3D animated Pre-visualization(3D애니메이티드 사전시각화)의 역할에 관한 연구)

  • Park, Sung-Ho
    • Cartoon and Animation Studies / s.51 / pp.293-319 / 2018
  • Thanks to advances in related technologies and equipment, today's video content, such as movies, animations, and television dramas, is rapidly expanding the range of cinematic imagination it can express. To meet the elevated visual expectations of audiences and realize exciting storytelling and fantastic worlds, the fusion of different techniques is actively used, and visual effects and image synthesis are becoming ever more realistic. Accordingly, recent VFX-oriented movies using CG have a much more complicated production process than before, and the importance of pre-visualization (pre-vis) is growing in the planning process for sophisticated design. Pre-vis refers to visualizing stories or directing ideas in advance, during the planning process before production of a movie or animation begins. 3D animated pre-visualization, which realizes directors' abstract and ambiguous ideas in a three-dimensional environment in advance, is a powerful means of visual storytelling that is actively used, particularly in the VFX film industry where CG is now broadly applied, and the role of pre-vis throughout production has grown compared to the past. Studies on the role and utility of pre-vis, however, remain insufficient. Therefore, this study examines the role of pre-vis in current VFX film productions, using examples of 3D animated pre-visualization production in which the researcher participated. The currently subdivided roles of pre-vis, including 3D animatics, are distinguished and analyzed with example images. Through this, the characteristics that pre-vis should have are clarified, and the advantages and utility of using pre-vis in production are reinforced. The goal of this study is to encourage the active use of pre-vis throughout production by building consensus on its various roles and their utility.

Helicopter Pilot Metaphor for 3D Space Navigation and Its Implementation Using a Joystick (3차원 공간 탐색을 위한 헬리콥터 조종사 메타포어와 그 구현)

  • Kim, Young-Kyoung;Jung, Moon-Ryul;Paik, Doowon;Kim, Dong-Hyun
    • Journal of the Korea Computer Graphics Society / v.3 no.1 / pp.57-67 / 1997
  • The navigation of virtual space comes down to the manipulation of the virtual camera. The movement of the virtual camera has six degrees of freedom. However, input devices such as mice and joysticks are 2D, so the camera movement that corresponds to the input device is a 2D movement at any given moment. Therefore, the 3D movement of the camera must be implemented as a combination of 2D and 1D movements. Many virtual space navigation browsers use several navigation modes to solve this problem, but the criteria for distinguishing the modes are not clear, some of the manipulations in each mode are repeated in other modes, and the kinesthetic correspondence of the input devices is often confusing. Hence the user has difficulty making correct decisions when navigating the virtual space. To solve this problem, we use a single navigation metaphor in which the different modes are organically integrated. In this paper we propose a helicopter pilot metaphor: the user navigates the virtual space like the pilot of a helicopter flying in space. We distinguish six 2D movement spaces of the helicopter: (1) movement on the horizontal plane, (2) movement on the vertical plane, (3) pitch and yaw rotations about the current position, (4) roll and pitch rotations about the current position, (5) horizontal and vertical turning, and (6) rotation about the target object. The six movement spaces are visualized and displayed as a sequence of auxiliary windows, and the user can select the desired movement space simply by jumping from one window to another while looking at the displayed 2D movement spaces. The movement of the camera in each movement space is controlled by the usual movements of the joystick.

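A minimal sketch of the mode-based mapping from 2D joystick input to camera motion, assuming a simple camera with a position and yaw/pitch/roll angles; the six modes follow the list in the abstract, while the window-based mode-selection UI, speeds, and rotation conventions are illustrative assumptions.

```python
import numpy as np

class HelicopterCamera:
    """Maps 2D joystick input (jx, jy in [-1, 1]) to camera motion, one mode at a time."""

    MODES = ("horizontal", "vertical", "pitch_yaw", "roll_pitch", "turn", "orbit")

    def __init__(self):
        self.pos = np.zeros(3)                    # camera position
        self.yaw = self.pitch = self.roll = 0.0   # orientation (radians)

    def forward(self):
        """Horizontal forward direction derived from yaw only."""
        return np.array([np.sin(self.yaw), 0.0, np.cos(self.yaw)])

    def right(self):
        return np.array([np.cos(self.yaw), 0.0, -np.sin(self.yaw)])

    def apply(self, mode, jx, jy, dt, speed=5.0, turn_rate=1.0, target=None):
        if mode == "horizontal":          # (1) translate on the horizontal plane
            self.pos += (self.right() * jx + self.forward() * jy) * speed * dt
        elif mode == "vertical":          # (2) translate on the vertical plane
            self.pos += (self.right() * jx + np.array([0.0, jy, 0.0])) * speed * dt
        elif mode == "pitch_yaw":         # (3) rotate about the current position
            self.yaw += jx * turn_rate * dt
            self.pitch += jy * turn_rate * dt
        elif mode == "roll_pitch":        # (4) rotate about the current position
            self.roll += jx * turn_rate * dt
            self.pitch += jy * turn_rate * dt
        elif mode == "turn":              # (5) turn while moving forward
            self.yaw += jx * turn_rate * dt
            self.pitch += jy * turn_rate * dt
            self.pos += self.forward() * speed * dt
        elif mode == "orbit" and target is not None:  # (6) rotate about the target object
            a = jx * turn_rate * dt
            off = self.pos - target
            off = np.array([np.cos(a) * off[0] + np.sin(a) * off[2], off[1],
                            -np.sin(a) * off[0] + np.cos(a) * off[2]])
            self.pos = target + off
            self.yaw += a                  # keep facing roughly toward the target
```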

A Study of 'Hear Me Later' VR Content Production to Improve the Perception of the Visually-Impaired (시각 장애인에 대한 인식 개선을 위한 'Hear me later' VR 콘텐츠 제작 연구)

  • Kang, YeWon;Cho, WonA;Hong, SeungA;Lee, KiHan;Ko, Hyeyoung
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.99-109 / 2020
  • This study was conducted to improve educational methods for raising awareness of the visually impaired. 'Hear me later' was designed and implemented as VR content that lets users experience the perspective and environment of the visually impaired. The main target audience ranges from middle and high school students to young adults in their twenties. The content follows a student's (the user's) daily life: waking up at home in the morning, going to school, taking classes, and returning home late in the dark. In addition, ten quests are placed on each map to induce user participation and activity. These quests are everyday activities for non-disabled people, but they let the user experience how inconvenient such activities are for the visually impaired. To verify the effect of 'Hear me later', the perception of visually impaired people among eight participants, ranging from their early teens to early twenties, was measured through pre- and post-evaluation of the VR content experience. The post-evaluation showed that perception of the visually impaired improved by 30% compared to the pre-evaluation; in particular, changes in misunderstandings and prejudice toward the visually impaired were remarkable. Through this study, the feasibility of a VR-based disability-experience education program that can freely construct space and time and maximize the experience was verified, and a foundation was laid for expanding it to various fields of awareness improvement for people with disabilities.

GPU-based dynamic point light particles rendering using 3D textures for real-time rendering (실시간 렌더링 환경에서의 3D 텍스처를 활용한 GPU 기반 동적 포인트 라이트 파티클 구현)

  • Kim, Byeong Jin;Lee, Taek Hee
    • Journal of the Korea Computer Graphics Society / v.26 no.3 / pp.123-131 / 2020
  • This study proposes a real-time rendering algorithm for lighting when each of more than 100,000 moving particles acts as a light source. Two 3D textures are used to dynamically determine the range of influence of each light: the first stores light color and the second stores light direction information. Each frame goes through two steps. The first step updates, in a compute shader, the particle information required for 3D texture initialization and rendering. The particle position is converted to the sampling coordinates of the 3D texture, and based on these coordinates, the first 3D texture accumulates the color sum of the particle lights affecting each voxel, while the second accumulates the sum of the direction vectors from the voxel to those particle lights. The second stage operates in the general rendering pipeline. From the world position of the polygon being rendered, the sampling coordinates of the 3D textures updated in the first step are calculated; since the 3D texture corresponds 1:1 to the game world, the world coordinates of the pixel are used as the sampling coordinates. Lighting is then computed from the sampled light color and direction vector. The 3D texture corresponds 1:1 to the actual game world with a minimum unit of 1 m, so in areas smaller than 1 m, staircase artifacts caused by the limited resolution occur; interpolation and super-sampling are performed during texture sampling to mitigate these problems. Measurements of the time taken to render a frame showed 146 ms for the forward lighting pipeline and 46 ms for the deferred lighting pipeline with 262,144 particles, and 214 ms for the forward lighting pipeline and 104 ms for the deferred lighting pipeline with 1,024,766 particle lights.
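
A CPU-side NumPy sketch of the two-volume idea, assuming a world-aligned grid at 1 m per voxel and a simple linear falloff; the actual method performs the accumulation in a compute shader and the sampling in the pixel shader, and trilinear interpolation/super-sampling is omitted here.

```python
import numpy as np

GRID = 64  # assume a 64 m x 64 m x 64 m world at 1 m per voxel

def build_light_volumes(positions, colors, radius=4.0):
    """Accumulate particle-light color and direction into two 3D grids.

    positions : (N, 3) world positions of particle lights (world units == voxel units).
    colors    : (N, 3) RGB intensities.
    Returns (color_vol, dir_vol), each of shape (GRID, GRID, GRID, 3).
    """
    color_vol = np.zeros((GRID, GRID, GRID, 3))
    dir_vol = np.zeros((GRID, GRID, GRID, 3))
    for p, c in zip(positions, colors):
        lo = np.maximum(np.floor(p - radius).astype(int), 0)
        hi = np.minimum(np.ceil(p + radius).astype(int) + 1, GRID)
        for x in range(lo[0], hi[0]):
            for y in range(lo[1], hi[1]):
                for z in range(lo[2], hi[2]):
                    to_light = p - np.array([x, y, z], dtype=float)
                    dist = np.linalg.norm(to_light)
                    if dist < radius:
                        falloff = 1.0 - dist / radius
                        color_vol[x, y, z] += c * falloff                # sum of light colors
                        dir_vol[x, y, z] += to_light / max(dist, 1e-6)   # sum of voxel-to-light directions
    return color_vol, dir_vol

def shade(world_pos, normal, albedo, color_vol, dir_vol):
    """Sample both volumes at the pixel's world position and apply a Lambert term."""
    v = tuple(np.clip(np.round(world_pos).astype(int), 0, GRID - 1))     # nearest-voxel sample
    light_dir = dir_vol[v]
    n = np.linalg.norm(light_dir)
    if n < 1e-6:
        return np.zeros(3)
    ndotl = max(float(np.dot(normal, light_dir / n)), 0.0)
    return albedo * color_vol[v] * ndotl
```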

An Integrated VR Platform for 3D and Image based Models: A Step toward Interactivity with Photo Realism (상호작용 및 사실감을 위한 3D/IBR 기반의 통합 VR환경)

  • Yoon, Jayoung;Kim, Gerard Jounghyun
    • Journal of the Korea Computer Graphics Society / v.6 no.4 / pp.1-7 / 2000
  • Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to its limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al. [1], these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, the "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing-distance and image-space criteria are used; however, switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, a 3D representation is used regardless of the viewing distance, if one exists. Finally, we carried out experiments to verify the theoretical derivation of the switching rule and obtained positive results.

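A minimal sketch of the representation-selection rule described above, assuming per-object distance thresholds; the depth-perception threshold and node names are simplified placeholders, not the WorldToolKit extension itself.

```python
from dataclasses import dataclass

@dataclass
class MixedObject:
    """One object with several representations kept in the scene graph."""
    name: str
    depth_threshold: float    # distance at which the object's internal depth becomes noticeable
    billboard_range: float    # beyond this, fold the object into the environment map
    has_3d_model: bool = True

def select_representation(obj: MixedObject, distance: float, interacting: bool) -> str:
    """Pick which node to traverse/render this frame.

    Rule from the abstract: distance/image-space criteria choose the LOD,
    switching to the 3D model once the user would perceive internal depth,
    and interaction always forces the 3D model when one exists.
    """
    if interacting and obj.has_3d_model:
        return "3d_model"
    if distance <= obj.depth_threshold and obj.has_3d_model:
        return "3d_model"
    if distance <= obj.billboard_range:
        return "billboard"
    return "environment_map"

# Hypothetical usage inside the traversal loop:
# select_representation(MixedObject("statue", 8.0, 40.0), distance=25.0, interacting=False)
# -> "billboard"
```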

Water droplet generation technique for 3D water drop sculptures (3차원 물방울 조각 생성장치의 구현을 위한 물방울 생성기법)

  • Lin, Long-Chun;Park, Yeon-yong;Jung, Moon Ryul
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.143-152 / 2019
  • This paper presents two new techniques for solving two problems of the water curtain: 'shape distortion' caused by gravity and 'resolution degradation' caused by fine satellite droplets around the shape. In the first method, when the user converts a three-dimensional model into a vertical sequence of slices, the slices are evenly spaced, and the method adjusts the time points at which the equidistant slices are released by the nozzle array. In this way, even though the velocity of a water drop increases over time due to gravity, the slices maintain their equal spacing at the moment the whole shape is formed, thereby preventing distortion. The second method is called the minimum-time-interval technique. The minimum time interval is the time between one open command of a nozzle and its next open command such that consecutive water drops are cleanly created without satellite drops. When the user converts a three-dimensional model into a sequence of slices, the slices are placed as close as possible rather than evenly spaced, considering the minimum time interval of consecutive drops: slices are arranged at short intervals in the top area of the shape and at long intervals in the bottom area. The minimum time interval is determined in advance by experiment and consists of the time from the open command to the moment the nozzle is fully open, the time during which the fully open state is maintained, and the time from the close command to the moment the nozzle is fully closed. The second method produces water drop sculptures with higher resolution than the first.
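
A minimal sketch of the first method: given equally spaced slice offsets below the nozzle plane, solve the free-fall relation d = ½gt² for each slice's fall time and schedule the nozzle release times backwards from the formation moment. The minimum-interval check is a simplified stand-in for the second method, and the 0.02 s value is illustrative, not the paper's measured valve timing.

```python
import math

G = 9.81  # gravitational acceleration (m/s^2)

def release_times(slice_offsets_m, formation_time_s):
    """Release time for each slice so that all slices line up at formation_time_s.

    slice_offsets_m : distance of each slice below the nozzle plane at the
                      formation moment (equally spaced for the first method).
    A drop released from rest at t_i falls d = 0.5 * g * (T - t_i)^2 by time T,
    so t_i = T - sqrt(2 * d / g); lower slices are released earlier.
    """
    return [formation_time_s - math.sqrt(2.0 * d / G) for d in slice_offsets_m]

def enforce_min_interval(times, min_interval_s=0.02):
    """Simplified second-method check: keep only releases that respect the
    per-nozzle minimum time interval between consecutive open commands."""
    kept, last = [], None
    for t in sorted(times):
        if last is None or t - last >= min_interval_s:
            kept.append(t)
            last = t
    return kept

# Example: 10 slices spaced 0.05 m apart, shape fully formed at T = 1.0 s.
offsets = [0.05 * i for i in range(1, 11)]
print([round(t, 3) for t in release_times(offsets, formation_time_s=1.0)])
```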