• Title/Summary/Keyword: 3D Virtual Space


Effective Volume Rendering and Virtual Staining Framework for Visualizing 3D Cell Image Data (3차원 세포 영상 데이터의 효과적인 볼륨 렌더링 및 가상 염색 프레임워크)

  • Kim, Taeho; Park, Jinah
    • Journal of the Korea Computer Graphics Society / v.24 no.1 / pp.9-16 / 2018
  • In this paper, we introduce a visualization framework for cell image data obtained from optical diffraction tomography (ODT), including a method for representing cell morphology in a 3D virtual environment and a color-mapping protocol. Unlike commonly known volume data sets with solid structural information, such as CT images of human organs or industrial machinery, cell image data carry rather vague information with considerable morphological variation at the boundaries. It is therefore difficult to produce a consistent representation of cell structure in the visualization results. To obtain the desired visual representation of cellular structures, we propose an interactive visualization technique for ODT data. To visualize the 3D shape of the cell, we adopt volume rendering, which is generally applied to volume data visualization, and improve the quality of the rendering result with an empty-space jittering method. Furthermore, we provide a layer-based independent rendering method with multiple transfer functions, so that two or more cellular structures can be represented in a unified render window. In our experiments, we examined the effectiveness of the proposed method by visualizing various types of cells acquired with a microscope that captures ODT and fluorescence images together.
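
    The jittered sampling mentioned in the abstract can be illustrated with a minimal front-to-back volume ray marcher. This is a sketch, not the paper's implementation: the axis-aligned rays, the transfer-function interface, and applying the jitter as a per-ray random offset of the first sample (a common way to suppress banding) are our assumptions.

    ```python
    import numpy as np

    def ray_march(volume, tf, step=1.0, jitter=True, rng=None):
        """Front-to-back compositing along the z axis of a volume.

        volume : (Z, Y, X) array of scalar densities in [0, 1]
        tf     : transfer function mapping density -> (rgb, alpha)

        Jittering the first sample of each ray by a random sub-step
        offset breaks up the banding artifacts of regular sampling;
        a simplified stand-in for the paper's empty-space jittering.
        """
        rng = rng or np.random.default_rng(0)
        Z, Y, X = volume.shape
        image = np.zeros((Y, X, 3))
        alpha_acc = np.zeros((Y, X))
        offsets = rng.random((Y, X)) * step if jitter else np.zeros((Y, X))
        t = 0.0
        while t < Z - 1:
            # per-ray jittered sample depth, clamped to the volume
            z = np.clip((t + offsets).astype(int), 0, Z - 1)
            sample = volume[z, np.arange(Y)[:, None], np.arange(X)[None, :]]
            rgb, a = tf(sample)
            w = (1.0 - alpha_acc) * a          # front-to-back weight
            image += w[..., None] * rgb
            alpha_acc += w
            t += step
        return image, alpha_acc
    ```

    The paper's layer-based rendering would run this once per transfer function and composite the layers; here a single transfer function is shown.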

Site-Suitability Analysis Using Spatial Information Analysis (공간정보 분석기법을 이용한 적지분석)

  • Han, Seung-Hee; Kim, Sung-Gil
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.12 / pp.5207-5215 / 2010
  • Selecting a proper location for a special-purpose complex facility requires comprehensive consideration of site conditions and the surrounding environment. In particular, for human living space, lighting, ventilation, and efficiency of land use are important elements, which calls for diverse 3D analyses based on 3D terrain modeling and virtual simulation. The high-resolution satellite imagery essential for terrain modeling is now available domestically through the Arirang 2 satellite (KOMPSAT-2), so such analyses can be performed at relatively low cost. In this study, several candidate sites were selected for a special-purpose complex plan, and site-suitability analysis was performed using 3D terrain modeling and land information. For this, land analysis, land-price calculation, slope analysis, and aspect analysis were carried out. By compiling the evaluation indices for each candidate site and attempting a quantitative evaluation, a proper location could be selected efficiently and reasonably.
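
    The slope and aspect analyses listed above are standard raster operations on a digital elevation model. A minimal sketch follows; the grid orientation (rows increasing northward, columns eastward), the cell spacing, and the aspect convention (compass bearing of steepest descent) are assumptions for illustration.

    ```python
    import numpy as np

    def slope_aspect(dem, cell=30.0):
        """Slope (degrees) and aspect (compass bearing of steepest
        descent, degrees clockwise from north) from a regular-grid DEM.
        Assumes rows increase northward and columns eastward; `cell`
        is the grid spacing in metres (assumed value)."""
        dz_dn, dz_de = np.gradient(dem.astype(float), cell)
        slope = np.degrees(np.arctan(np.hypot(dz_de, dz_dn)))
        # downhill direction is the negative gradient
        aspect = np.degrees(np.arctan2(-dz_de, -dz_dn)) % 360.0
        return slope, aspect
    ```

    A plane rising eastward at 45 degrees, for example, yields a uniform slope of 45 and a west-facing (270) aspect under this convention.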

2D Interpolation of 3D Points using Video-based Point Cloud Compression (비디오 기반 포인트 클라우드 압축을 사용한 3차원 포인트의 2차원 보간 방안)

  • Hwang, Yonghae; Kim, Junsik; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.26 no.6 / pp.692-703 / 2021
  • Recently, with the development of computer graphics technology, research on representing real objects as more realistic virtual graphics has been actively conducted. A point cloud represents a 3D object with numerous points, each carrying 3D spatial coordinates and color information, and point clouds require huge data storage and high-performance computing devices to support various services. Video-based Point Cloud Compression (V-PCC), currently being standardized by MPEG, is a projection-based method that projects the point cloud onto 2D planes and then compresses the result with 2D video codecs. V-PCC compresses point cloud objects using 2D images such as the occupancy map, geometry image, and attribute image, together with auxiliary information that describes the relationship between the 2D planes and 3D space. When increasing the density of a point cloud or enlarging an object, 3D computation is generally used, but it is complicated and time-consuming, and it is difficult to determine the correct locations for new points. This paper proposes a method, within the V-PCC framework, that applies 2D interpolation to the images onto which the point cloud is projected, generating additional points at more accurate locations with less computation.
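
    The core idea, interpolating on the projected 2D images instead of in 3D, can be sketched as follows. This is a deliberately simplified illustration: real V-PCC patch packing, attribute interpolation, and the reprojection to 3D are omitted, and the occupancy map is treated as a simple boolean mask.

    ```python
    import numpy as np

    def densify(geometry, occupancy):
        """Insert a new point between each pair of horizontally adjacent
        occupied pixels by linearly interpolating depth on the geometry
        image. Returns (u, v, depth) tuples in patch space; a sketch of
        2D interpolation in the V-PCC projection domain."""
        new_pts = []
        h, w = geometry.shape
        for y in range(h):
            for x in range(w - 1):
                # only interpolate where both pixels hold real points
                if occupancy[y, x] and occupancy[y, x + 1]:
                    depth = 0.5 * (geometry[y, x] + geometry[y, x + 1])
                    new_pts.append((x + 0.5, y, depth))
        return new_pts
    ```

    Restricting interpolation to occupied pixel pairs is what keeps the new points on the object surface; interpolating across an occupancy gap would invent points in empty space.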

A Novel Color Conversion Method for Color Vision Deficiency using Color Segmentation (색각 이상자들을 위한 컬러 영역 분할 기반 색 변환 기법)

  • Han, Dong-Il; Park, Jin-San; Choi, Jong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.5 / pp.37-44 / 2011
  • This paper proposes a confusion-line separating algorithm in the CIE Lab color space, based on color segmentation, for protanopia and deuteranopia. Images are segmented into regions by grouping adjacent pixels with similar color information according to their hue components. To this end we use region growing, with seed points taken from the pixels corresponding to peak points of a low-pass-filtered hue histogram. To build a color vision deficiency (CVD) confusion-line map, we divide the RGB 3D space into 512 virtual boxes so that boxes lying on the same confusion line can be identified easily. We then check whether segmented regions lie on the same confusion line, and perform color adjustment in the CIE Lab color space so that all adjacent regions lie on different confusion lines, providing the best color discrimination for people with CVD.
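
    The seed-selection step, peaks of a low-pass-filtered hue histogram, can be sketched as below. The bin count, filter kernel, and plateau handling are assumed values, not the paper's parameters.

    ```python
    import numpy as np

    def hue_seed_values(hue, bins=64, kernel=5):
        """Pick region-growing seed hues as local maxima of a low-pass
        filtered hue histogram (bin count and kernel width assumed).

        hue : array of hue values in [0, 1)
        """
        hist, edges = np.histogram(hue, bins=bins, range=(0.0, 1.0))
        k = np.ones(kernel) / kernel
        smooth = np.convolve(hist, k, mode="same")   # low-pass filter
        peaks = [i for i in range(1, bins - 1)
                 if smooth[i] > smooth[i - 1] and smooth[i] >= smooth[i + 1]]
        centers = (edges[:-1] + edges[1:]) / 2
        return [centers[i] for i in peaks]
    ```

    Region growing would then start from pixels whose hue is closest to each returned seed value.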

ROUTE/DASH-SRD based Point Cloud Content Region Division Transfer and Density Scalability Supporting Method (포인트 클라우드 콘텐츠의 밀도 스케일러빌리티를 지원하는 ROUTE/DASH-SRD 기반 영역 분할 전송 방법)

  • Kim, Doohwan; Park, Seonghwan; Kim, Kyuheon
    • Journal of Broadcast Engineering / v.24 no.5 / pp.849-858 / 2019
  • Recent developments in computer graphics and image processing have increased interest in point cloud technology, which captures real-space and object information as three-dimensional data. In particular, point clouds can provide spatial information accurately, and have attracted a great deal of interest in autonomous vehicles and AR (Augmented Reality)/VR (Virtual Reality). However, 3D point cloud content requires far more data than conventional 2D images, so various technologies must be developed before it can be delivered to users. To address this, the international standardization organization MPEG (Moving Picture Experts Group) is discussing efficient compression and transmission schemes. In this paper, we provide a region-division transfer method for 3D point cloud content by extending the existing MPEG-DASH (Dynamic Adaptive Streaming over HTTP) SRD (Spatial Relationship Description) technology; quality parameters are additionally defined in the signaling message so that quality can be selected per region according to the user's request. We also design a verification platform for a ROUTE (Real-time Object delivery over Unidirectional Transport)/DASH-based heterogeneous network environment and use it to validate the proposed technology.
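
    The client-side selection implied by the abstract, request high density only for regions the user actually sees, can be sketched as below. The SRD value layout follows the seven standard DASH-SRD fields; the density tiers and the viewport-overlap rule are our assumptions, not the paper's signaling.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Region:
        x: int
        y: int
        w: int
        h: int

    def parse_srd(value):
        """Parse an SRD-style value string
        'source_id,x,y,w,h,total_w,total_h' (the seven standard
        DASH-SRD fields); returns (source_id, Region)."""
        f = [int(v) for v in value.split(",")]
        return f[0], Region(f[1], f[2], f[3], f[4])

    def overlaps(r, viewport):
        """True if the region rectangle intersects viewport (x, y, w, h)."""
        vx, vy, vw, vh = viewport
        return not (r.x + r.w <= vx or vx + vw <= r.x or
                    r.y + r.h <= vy or vy + vh <= r.y)

    def select_density(regions, viewport, high=2, low=0):
        """Assign the high-density representation to regions intersecting
        the user's viewport and the low-density one elsewhere -- the
        per-region density-scalability choice the paper signals."""
        return [high if overlaps(r, viewport) else low for r in regions]
    ```

    In a real client the returned tiers would map to DASH representations of each spatial sub-stream.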

Development of VR-based Crane Simulator using Training Server (트레이닝 서버를 이용한 VR 기반의 크레인 시뮬레이터 개발)

  • Wan-Jik Lee; Geon-Young Kim; Seok-Yeol Heo
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.703-709 / 2023
  • For crane operators in charge of port loading and unloading, it would be ideal to train on a real crane in an environment similar to a port, but this has time, space, and cost limitations. To overcome them, VR (Virtual Reality)-based crane training programs and related devices are receiving a lot of attention. In this paper, we designed and implemented a VR-based harbor crane simulator that runs on an HMD. The simulator consists of a crane simulator program running on the HMD, an IoT driving terminal that processes trainees' crane-operation input, and a training server that stores trainees' training information. The simulator program provides VR-based crane training scenarios implemented with Unity3D, and the Arduino-based IoT driving terminal, composed of two controllers, transmits the user's driving operations to the HMD. In particular, our crane simulator uses the training server to store, per trainee, environment settings, progress and training time, and records of driving-warning situations in a database. With this server, trainees can use the simulator in a more convenient environment, and improved educational effects can be expected from the recorded training information.
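
    The training server's role, persisting per-trainee session records, can be sketched with a minimal store. The schema and field names here (scenario, minutes, warning count) are our assumptions; the paper does not specify its database layout.

    ```python
    import sqlite3
    from dataclasses import dataclass

    @dataclass
    class TrainingRecord:
        trainee_id: str
        scenario: str
        minutes: float
        warnings: int   # count of driving-warning events in the session

    def save(db, rec):
        """Persist one training session record (hypothetical schema)."""
        db.execute("""CREATE TABLE IF NOT EXISTS training
                      (trainee_id TEXT, scenario TEXT,
                       minutes REAL, warnings INTEGER)""")
        db.execute("INSERT INTO training VALUES (?,?,?,?)",
                   (rec.trainee_id, rec.scenario, rec.minutes, rec.warnings))
        db.commit()
    ```

    The HMD program would post such a record at the end of each scenario, and the server could aggregate them per trainee for progress reports.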

Dragging Body Parts in 3D Space to Direct Animated Characters (3차원 공간 상의 신체 부위 드래깅을 통한 캐릭터 애니메이션 제어)

  • Lee, Kang Hoon; Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society / v.21 no.2 / pp.11-20 / 2015
  • We present a new interactive technique for directing the motion sequences of an animated character by dragging a specific body part to a desired location in the three-dimensional virtual environment via a hand motion tracking device. The motion sequences of our character are synthesized by reordering subsequences of captured motion data based on a well-known graph representation. For each new input location, our system samples the space of possible future states by unrolling the graph into a spatial search tree, and retrieves one of the states in which the dragged body part gets closer to the input location. We minimize the difference between each pair of successively retrieved states, so that the user can anticipate which states will be found by varying the input location and, as a result, quickly reach the desired states. The usefulness of our method is demonstrated through experiments with breakdance, boxing, and basketball motion data.
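
    The graph-unrolling search can be sketched as a fixed-depth tree expansion over motion clips. The clip-level granularity, the fixed depth, and the squared-distance objective are assumptions for illustration; the paper additionally penalizes differences between successively retrieved states, which is omitted here.

    ```python
    def best_future_state(graph, positions, start, target, depth=3):
        """Unroll a motion graph into a search tree of the given depth
        and return the clip path whose final body-part position is
        nearest the dragged target.

        graph     : clip -> list of clips that may follow it
        positions : clip -> (x, y, z) of the tracked body part at its end
        """
        def dist2(p, q):
            return sum((a - b) ** 2 for a, b in zip(p, q))

        frontier = [[start]]
        for _ in range(depth):           # breadth-first unrolling
            frontier = [path + [nxt] for path in frontier
                        for nxt in graph[path[-1]]]
        best_path, best_d = None, float("inf")
        for path in frontier:
            d = dist2(positions[path[-1]], target)
            if d < best_d:
                best_path, best_d = path, d
        return best_path
    ```

    Because the frontier grows exponentially with depth, a practical system would prune or sample it; the exhaustive version keeps the idea visible.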

Influences of the User's Experienced Space Perception on the Flow at Digital Interactive Contents (디지털 상호작용 콘텐츠에서 체험적 공간감이 몰입에 미치는 영향)

  • Yun, Han-Kyung; Song, Bok-Hee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.5 no.4 / pp.198-205 / 2012
  • This study develops a tool for evaluating flow experience and presence in interactive digital content. The tool measures the degree of flow and presence through a survey built from the factors known to affect them. One reason flow experience and presence are reduced in 3D digital content is that the experience in the virtual world differs from the user's prior learning in real life. Recent interactive content that uses the user's physical movement as input can even put users in unsafe situations because of this mismatch. The suggested measurement tool can evaluate the presence and flow experience of interactive 3D content, and the underlying flow and presence factors can serve as a general guideline for all stages of producing interactive 3D digital content.

Study on the Emotional Response of VR Contents Based on Photorealism: Focusing on 360 Product Image (실사 기반 VR 콘텐츠의 감성 반응 연구: 360 제품 이미지를 중심으로)

  • Sim, Hyun-Jun; Noh, Yeon-Sook
    • Science of Emotion and Sensibility / v.23 no.2 / pp.75-88 / 2020
  • With the development of information technology, methods of delivering product information have moved from offline and 2D to online and 3D, and various methods for efficient information delivery have been built. These attempts not only deliver product information in an online space where no physical product exists, but also play a crucial role in diversifying and revitalizing online shopping by providing virtual experiences to consumers. A 360 product image is photorealistic VR in which a subject is rotated and photographed so that the object can be viewed in three dimensions. It has attracted considerable attention because it can deliver richer information about an object than conventional still photography. A 360 product image is influenced by diverse production factors, which in turn produce different user responses. However, as the technology is young, related research is still insufficient. This study therefore aimed to grasp user responses, which vary with the product type and the number of source images used in the 360 product image process. A representative product from the product groups commonly found in online shopping malls was selected, a 360 product image was produced, and an experiment was conducted with 75 users. The emotional responses to the 360 product image were analyzed through an experimental questionnaire based on the semantic differential method. The results can serve as basic data for understanding consumers' sensibility toward 360 product images.

Generation of Multi-view Images Using Depth Map Decomposition and Edge Smoothing (깊이맵의 정보 분해와 경계 평탄 필터링을 이용한 다시점 영상 생성 방법)

  • Kim, Sung-Yeol; Lee, Sang-Beom; Kim, Yoo-Kyung; Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.471-482 / 2006
  • In this paper, we propose a new scheme for generating multi-view images using depth-map decomposition and adaptive edge smoothing. After applying smoothing filters with adaptive window sizes to the edge regions of the depth map, we decompose the smoothed depth map into four types of images: regular mesh, object boundary, feature point, and number-of-layers images. We then generate 3D scenes from the decomposed images using a 3D mesh triangulation technique. Finally, we extract multi-view images from the reconstructed 3D scenes by changing the position of a virtual camera in 3D space. Experimental results show that our scheme generates multi-view images successfully, minimizing the rubber-sheet problem through edge smoothing, and renders consecutive 3D scenes in real time through information decomposition of the depth maps. In addition, since the depth data are preserved, unlike in previous asymmetric filtering methods, the proposed scheme can be used for 3D applications that need depth information, such as depth keying.
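
    The edge-adaptive smoothing step can be sketched as follows: smooth only near depth discontinuities, with the averaging window growing with local gradient magnitude. The gradient threshold, window growth rule, and box-average filter are assumed values for illustration, not the paper's exact filter.

    ```python
    import numpy as np

    def smooth_depth_edges(depth, grad_thresh=8.0, max_win=7):
        """Smooth a depth map only near depth discontinuities, widening
        the averaging window with the local gradient magnitude; a sketch
        of edge-adaptive smoothing to ease the rubber-sheet problem."""
        gy, gx = np.gradient(depth.astype(float))
        grad = np.hypot(gx, gy)
        out = depth.astype(float).copy()
        h, w = depth.shape
        for y in range(h):
            for x in range(w):
                if grad[y, x] > grad_thresh:
                    # window radius grows with edge strength, capped
                    r = min(max_win, 1 + int(grad[y, x] / grad_thresh)) // 2 + 1
                    y0, y1 = max(0, y - r), min(h, y + r + 1)
                    x0, x1 = max(0, x - r), min(w, x + r + 1)
                    out[y, x] = depth[y0:y1, x0:x1].mean()
        return out
    ```

    Flat regions pass through untouched, which is what lets the scheme preserve depth data away from object boundaries.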