• Title/Summary/Keyword: 3D images

Search Result 3,550, Processing Time 0.032 seconds

Study on 2D Sprite Generation Using the Impersonator Network

  • Yongjun Choi;Beomjoo Seo;Shinjin Kang;Jongin Choi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.7
    • /
    • pp.1794-1806
    • /
    • 2023
  • This study presents a method for capturing photographs of users as input and converting them into 2D character animation sprites using a generative adversarial network (GAN)-based artificial intelligence network. Traditionally, 2D character animations have been created by manually drawing an entire sequence of sprite images, which incurs high development costs. To address this issue, this study proposes a technique that combines motion videos and sample 2D images. In the proposed 2D sprite generation process, a sequence of images is extracted from real-life footage captured by the user, and these are combined with character images from within the game. Our research leverages cutting-edge deep learning-based image manipulation techniques, such as the GAN-based motion transfer network (Impersonator) and background noise removal (U2-Net), to generate a sequence of animation sprites from a single image. The proposed technique enables the creation of diverse animations and motions from just one image. By utilizing these advancements, we aim to enhance productivity in the game and animation industry through improved efficiency and streamlined production processes, offering significant potential for boosting both productivity and creativity.

2D-3D Conversion Method Based on Scene Space Reconstruction (장면의 공간 재구성 기법을 이용한 2D-3D 변환 방법)

  • Kim, Myungha;Hong, Hyunki
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.7
    • /
    • pp.1-9
    • /
    • 2014
  • Previous 2D-3D conversion methods for generating 3D stereo images from a 2D sequence involve labor-intensive procedures in their production pipelines. This paper presents an efficient 2D-3D conversion system based on scene structure reconstruction from an image sequence. The proposed system reconstructs a scene space and produces 3D stereo images with texture re-projection. Experimental results show that the proposed method can generate precise 3D contents based on scene structure information. Using the proposed reconstruction tool, the stereographer can collaborate efficiently with workers in the production pipeline for 3D contents production.

Three-Dimensional Reconstruction Using Dense Correspondences from Sequence Images (연속된 영상으로부터 조밀한 대응점을 이용한 3차원 재구성)

  • Seo Yung-Ho;Kim Sang-Hoon;Choi Jong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.8C
    • /
    • pp.775-782
    • /
    • 2005
  • In the case of 3D reconstruction from dense data in uncalibrated sequence images, we encounter the problems of searching for many correspondences and of high computational cost. In this paper, we propose a key frame selection method for uncalibrated images and an effective 3D reconstruction method using the key frames; that is, reconstruction can be performed on a smaller number of views in the image sequence. We extract correspondences from the selected key frames, and camera calibration is then performed from these extracted correspondences. We use edge images to find dense correspondences between the selected key frames. The proposed method for finding dense correspondences can be used to recover the 3D structure of the scene more efficiently.

The Study on the Implementation of the X-Ray CT System Using the Cone-Beam for the 3D Dynamic Image Acquisition (3D 동영상획득을 위한 Cone-Beam 형 X-Ray CT 시스템 구현에 관한 연구)

  • Jeong, Chan-Woong;Jun, Kyu-Suk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.4
    • /
    • pp.370-374
    • /
    • 2009
  • In this paper, we present a new cone-beam computerized tomography (CBCT) system for the reconstruction of 3-dimensional dynamic images. A system using a cone beam exposes the patient to relatively less radiation than one using a fan beam. In the system, the 3-D image is reconstructed from the radiation angle of the X-ray in the image processing unit and transmitted to the monitor. In the image processing unit, the Three-Pass Shear Matrices method, a kind of rotation-based method, is applied to reconstruct the 3-D image, because it requires fewer transcendental-function evaluations than the one-pass shear matrix and thus decreases the calculation time for reconstructing the 3-D image in the processor. The new system can acquire 3~5 3-D images per second and can reconstruct 3-D dynamic images in real time.
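The rotation-by-three-shears idea this abstract relies on can be sketched in a few lines. This is a generic illustration of the classic shear decomposition (shear factors −tan(θ/2), sin θ, −tan(θ/2)), not the authors' CT implementation; the function name and the wrap-around `np.roll` shifts are simplifying assumptions (a real reconstruction would pad instead of wrapping).

```python
import numpy as np

def shear_rotate(img, theta):
    """Rotate a 2D array by theta (radians) using the three-shear
    decomposition: x-shear, y-shear, x-shear, with shear factors
    a = -tan(theta/2) and b = sin(theta). Each shear is an integer
    per-row or per-column shift, so no interpolation (and no
    transcendental function per pixel) is needed."""
    a = -np.tan(theta / 2.0)
    b = np.sin(theta)
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = img.copy()
    # first x-shear: shift each row horizontally about the center
    for y in range(h):
        out[y] = np.roll(out[y], int(round(a * (y - cy))))
    # y-shear: shift each column vertically
    for x in range(w):
        out[:, x] = np.roll(out[:, x], int(round(b * (x - cx))))
    # second x-shear
    for y in range(h):
        out[y] = np.roll(out[y], int(round(a * (y - cy))))
    return out
```

Because each pass only shifts whole rows or columns, pixel values are moved rather than resampled, which is what makes the method attractive for fast repeated reconstruction.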

Construction of 3D shapes of objects from reconstructed 3D points (복원된 3차원 점들로부터 3차원 객체 모양 구성)

  • Mlyahilu, John;Kim, Jongnam
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.10a
    • /
    • pp.822-824
    • /
    • 2018
  • Estimation of 3-D objects from 2-D images is generally performed by either motion-based or scene-feature-based methods, as described in the literature. Structure from motion, the method employed in this study, uses a calibrated camera and 3-D points reconstructed from the structure of the scene to obtain reliable and precise estimates. In this study, we construct 3-D shapes using color pixels and reconstructed 3-D points to determine observable differences between the constructed 3-D images. The estimation using reconstructed 3-D points indicates that the sphere is recovered by use of a scale factor due to its known size, while the shape obtained using color pixels looks similar to the former but differs in the scales of the axes.

Recent Technologies for the Acquisition and Processing of 3D Images Based on Deep Learning (딥러닝기반 입체 영상의 획득 및 처리 기술 동향)

  • Yoon, M.S.
    • Electronics and Telecommunications Trends
    • /
    • v.35 no.5
    • /
    • pp.112-122
    • /
    • 2020
  • In 3D computer graphics, a depth map is an image that provides information about the distance from the viewpoint to the subject's surface. Stereo sensors, depth cameras, and imaging systems using an active illumination system and a time-resolved detector can perform accurate depth measurements with their own light sources. The 3D image information obtained through the depth map is useful in 3D modeling, autonomous vehicle navigation, object recognition and remote gesture detection, resolution-enhanced medical imaging, aviation and defense technology, and robotics. In addition, depth-map information is important data for extracting and restoring multi-view images and for extracting the phase information required for digital hologram synthesis. This study surveys recent research trends in deep learning-based 3D data analysis methods and depth-map extraction using convolutional neural networks. Further, the study focuses on 3D image processing technology related to digital holograms and multi-view image extraction/reconstruction, which are becoming more popular as the computing power of hardware rapidly increases.

Generation of Multi-view Images Using Depth Map Decomposition and Edge Smoothing (깊이맵의 정보 분해와 경계 평탄 필터링을 이용한 다시점 영상 생성 방법)

  • Kim, Sung-Yeol;Lee, Sang-Beom;Kim, Yoo-Kyung;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.471-482
    • /
    • 2006
  • In this paper, we propose a new scheme for generating multi-view images using depth-map decomposition and adaptive edge smoothing. After applying smoothing filtering with an adaptive window size to edge regions of the depth map, we decompose the smoothed depth map into four types of images: regular-mesh, object-boundary, feature-point, and number-of-layers images. Then, we generate 3-D scenes from the decomposed images using a 3-D mesh triangulation technique. Finally, we extract multi-view images from the reconstructed 3-D scenes by changing the position of a virtual camera in 3-D space. Experimental results show that our scheme generates multi-view images successfully, minimizing the rubber-sheet problem through edge smoothing, and renders consecutive 3-D scenes in real time through information decomposition of the depth maps. In addition, since the depth data are preserved, unlike in the previous asymmetric filtering method, the proposed scheme can be used for 3-D applications that need depth information, such as depth keying.
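The final step described here, extracting virtual views by moving a camera through a depth-textured scene, is often approximated by per-pixel horizontal warping. The sketch below is a deliberately simplified depth-image-based-rendering step, not the authors' mesh-triangulation pipeline: it assumes a grayscale image, a depth map normalized so that 1.0 is nearest, and it leaves disocclusion holes at zero (these holes are exactly where the rubber-sheet artifacts the paper targets appear).

```python
import numpy as np

def render_view(color, depth, baseline):
    """Warp `color` to a nearby virtual viewpoint by shifting each pixel
    horizontally in proportion to its depth (depth in [0, 1], 1 = nearest).
    Nearer pixels are painted last so they occlude farther ones; target
    pixels with no source remain 0 (disocclusion holes)."""
    h, w = depth.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in np.argsort(depth[y]):      # far first, near last
            d = int(round(baseline * depth[y, x]))
            if 0 <= x + d < w:
                out[y, x + d] = color[y, x]
    return out
```

Sweeping `baseline` over a range of values yields the sequence of views a multi-view display needs from a single color-plus-depth pair.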

Three-Dimensional Map System Using Integral Imaging Technique (집적 영상 기술을 이용한 3차원 지도 시스템)

  • Cho, Myungjin
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.11
    • /
    • pp.2799-2804
    • /
    • 2014
  • In this paper, we propose a three-dimensional map system that extracts 3D information using the integral imaging technique. Integral imaging can record multiple elemental images with different perspectives using a 2D image acquisition device with a lenslet array. Using these images, integral imaging can obtain 3D information and display 3D images. In this paper, the position difference between elemental images is obtained using the sum of absolute differences (SAD), and 3D information is then extracted. This technique can therefore recover the height information of 3D objects.
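The SAD matching step mentioned above can be illustrated with a minimal sketch: slide one elemental image over its neighbor and keep the horizontal shift with the lowest mean absolute difference. Under the usual integral-imaging geometry, depth then follows from z ≈ g·p/Δx, where g is the lenslet-to-sensor gap, p the lenslet pitch, and Δx the found shift; the function name and search range below are assumptions for illustration.

```python
import numpy as np

def sad_shift(left, right, max_shift):
    """Return the horizontal shift d (0..max_shift) that minimizes the
    mean sum-of-absolute-differences between the overlapping regions
    of two neighboring elemental images."""
    w = left.shape[1]
    best_d, best_sad = 0, np.inf
    for d in range(max_shift + 1):
        # compare left shifted by d against right on their overlap
        sad = np.abs(left[:, d:] - right[:, :w - d]).mean()
        if sad < best_sad:
            best_sad, best_d = sad, d
    return best_d
```

Running this block-wise over the elemental-image array gives a per-region disparity, and hence a per-region height estimate for the map.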

Visualization and Localization of Fusion Image Using VRML for Three-dimensional Modeling of Epileptic Seizure Focus (VRML을 이용한 융합 영상에서 간질환자 발작 진원지의 3차원적 가시화와 위치 측정 구현)

  • 이상호;김동현;유선국;정해조;윤미진;손혜경;강원석;이종두;김희중
    • Progress in Medical Physics
    • /
    • v.14 no.1
    • /
    • pp.34-42
    • /
    • 2003
  • In medical imaging, three-dimensional (3D) display using the Virtual Reality Modeling Language (VRML), a portable file format, can deliver intuitive information more efficiently on the World Wide Web (WWW). Web-based 3D visualization of functional images combined with anatomical images has not been studied much in a systematic way. The goal of this study was to achieve simultaneous observation of 3D anatomic and functional models together with planar images on the WWW, providing their locational information in 3D space with a measuring implement built in VRML. MRI and ictal-interictal SPECT images were obtained from one epileptic patient. Subtraction ictal SPECT co-registered to MRI (SISCOM) was performed to improve identification of the seizure focus. SISCOM image volumes were thresholded at one standard deviation (1-SD) and two standard deviations (2-SD). SISCOM foci and the boundaries of gray matter, white matter, and cerebrospinal fluid (CSF) in the MRI volume were segmented and rendered into VRML polygonal surfaces by the marching cubes algorithm. Line profiles along the x- and y-axes representing real lengths on an image were acquired, and their maximum lengths were both 211.67 mm. The ratio of the real size to the rendered VRML surface size was approximately 1 to 605.9. A VRML measuring tool was made and merged with the previous VRML surfaces. User interface tools were embedded with JavaScript routines to display MRI planar images as cross sections of the 3D surface models and to set the transparencies of the 3D surface models. When the transparencies were properly controlled, a fused display of the brain geometry with the 3D distributions of focal activated regions provided intuitive spatial correlations among the three 3D surface models. The epileptic seizure focus was in the right temporal lobe of the brain. The real position of the seizure focus could be verified with the VRML measuring tool, and the anatomy corresponding to the seizure focus could be confirmed from the MRI planar images crossing the 3D surface models. The VRML application developed in this study has several advantages. First, 3D fused display and control of anatomic and functional images were achieved on the WWW. Second, vector analysis of a 3D surface model was enabled by the VRML measuring tool based on real size. Finally, the anatomy corresponding to the seizure focus was intuitively detected through correlation with the MRI images. Our web-based visualization of 3D fusion images and their localization will aid online research and education in diagnostic radiology, therapeutic radiology, and surgical applications.

3D Model Retrieval Based on Orthogonal Projections

  • Wei, Liu;Yuanjun, He
    • International Journal of CAD/CAM
    • /
    • v.6 no.1
    • /
    • pp.117-123
    • /
    • 2006
  • Recently, with the development of 3D modeling and digitizing tools, more and more models have been created, which makes 3D model retrieval systems a necessity. In this paper, we investigate a new method for 3D model retrieval based on orthogonal projections. We assume that 3D models are composed of triangular meshes. The algorithm proceeds first with a normalization step in which the 3D models are transformed into canonical coordinates. Then each model is orthogonally projected onto the six faces of the cube that contains it. A subsequent step extracts features from the projected images using moment invariants and the polar-radius Fourier transform. The feature vector of each 3D model is composed of the features extracted from the projected images with different weights. Our system validates that this method can distinguish 3D models effectively, and experiments show that it performs quite well.
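The pipeline in this abstract, normalize, project onto the faces of a bounding cube, then extract moment features, can be sketched as follows. This is a schematic illustration on a point cloud rather than a full triangle-mesh rasterizer, and it computes plain central moments rather than the paper's full moment-invariant and polar-radius Fourier features; the resolution and function names are assumptions.

```python
import numpy as np

def normalize_points(pts):
    """Canonical coordinates: center on the centroid and scale so the
    farthest vertex lies on the unit sphere (the paper's normalization
    additionally aligns the principal axes)."""
    pts = pts - pts.mean(axis=0)
    return pts / np.linalg.norm(pts, axis=1).max()

def project(pts, drop_axis, res=32):
    """Orthogonal projection onto the plane perpendicular to `drop_axis`,
    rasterized into a res x res binary image."""
    uv = np.delete(pts, drop_axis, axis=1)
    ij = np.clip(((uv + 1) / 2 * (res - 1)).round().astype(int), 0, res - 1)
    img = np.zeros((res, res))
    img[ij[:, 0], ij[:, 1]] = 1.0
    return img

def central_moment(img, p, q):
    """Translation-invariant central moment mu_pq of a binary image,
    the building block of moment-invariant features."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    ybar, xbar = (ys * img).sum() / m00, (xs * img).sum() / m00
    return (((ys - ybar) ** p) * ((xs - xbar) ** q) * img).sum()
```

Concatenating such moments across the six face projections (with per-face weights, as the paper describes) yields the retrieval feature vector compared between models.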