• Title/Abstract/Keywords: virtual images

Search results: 831 items, processing time: 0.026 s

Interactive Authoring Tool for Mobile Augmented Reality Content

  • Jeon, Jiyoung;Hong, Min;Yi, Manhui;Chun, Jiyoon;Kim, Ji Sim;Choi, Yoo-Joo
    • Journal of Information Processing Systems
    • /
    • Vol. 12, No. 4
    • /
    • pp.612-630
    • /
    • 2016
  • As mobile augmented reality technologies spread, many users want to produce the augmented reality (AR) content they need by themselves. To keep pace with such needs, we have developed a mobile AR content builder (hereafter referred to as MARB) that enables the user to easily connect a natural marker and a virtual object with various interaction events used to manipulate the virtual object in a mobile environment, so that users can simply produce AR content using natural photos and virtual objects that they select. MARB consists of five major modules: the target manager, virtual object manager, AR accessory manager, AR content manager, and AR viewer. The target manager, virtual object manager, and AR accessory manager register and manage natural target markers, various virtual objects, and content accessories (such as decorating images), respectively. The AR content manager defines a connection between a target and a virtual object, enabling various interactions for desired functions such as translation/rotation/scaling of the virtual object, playing music, etc. The AR viewer augments various virtual objects (such as 2D images, 3D models, and video clips) on the pertinent target. MARB has been developed in a mobile application (app) format so that AR content can be created simply on mobile smart devices, without switching to a PC environment for authoring. In this paper, we present the detailed organization and applications of MARB. It is expected that MARB will enable ordinary users to produce diverse mobile AR content for various purposes with ease and contribute to expanding the mobile AR market through the spread of a variety of AR content.
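The marker-object-interaction linkage the abstract describes can be sketched as a small data model. This is a hypothetical illustration only; all class and field names below are invented for the sketch and are not MARB's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of connecting a natural target marker to a virtual
# object with interaction events; names are illustrative, not MARB's API.

@dataclass
class VirtualObject:
    name: str
    kind: str  # e.g. "2d_image", "3d_model", "video_clip"

@dataclass
class ARContent:
    target_marker: str              # natural photo used as the marker
    virtual_object: VirtualObject
    interactions: dict = field(default_factory=dict)  # event -> action

    def on_event(self, event):
        # Dispatch a user interaction (translate/rotate/scale, play music, ...)
        return self.interactions.get(event, "ignored")

content = ARContent(
    target_marker="my_photo.jpg",
    virtual_object=VirtualObject("teapot", "3d_model"),
    interactions={"tap": "rotate", "pinch": "scale"},
)
print(content.on_event("tap"))    # → rotate
print(content.on_event("swipe"))  # → ignored
```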

사람 뇌의 3차원 영상과 가상해부 풀그림 만들기 (Manufacture of 3-Dimensional Image and Virtual Dissection Program of the Human Brain)

  • 정민석;이제만;박승규;김민구
    • 대한의용생체공학회:학술대회논문집
    • /
    • 대한의용생체공학회 1998년도 추계학술대회
    • /
    • pp.57-59
    • /
    • 1998
  • For medical students and doctors, knowledge of the three-dimensional (3D) structure of the brain is very important in the diagnosis and treatment of brain diseases. Two-dimensional (2D) tools (e.g., anatomy books) and traditional 3D tools (e.g., plastic models) are not sufficient for understanding the complex structures of the brain. However, dissecting a cadaver brain is not always possible when it is necessary. To overcome this problem, virtual dissection programs of the brain have been developed. However, most programs include only 2D images, which do not permit free dissection and free rotation. Many programs are made from radiographs, which are not as realistic as a sectioned cadaver because radiographs do not reveal true color and have limited resolution. It is also necessary to make virtual dissection programs for each race and ethnic group. We attempted to make a virtual dissection program using a 3D image of the brain from a Korean cadaver. The purpose of this study is to present an educational tool for those interested in the anatomy of the brain. The procedures for making this program were as follows. A brain extracted from a 58-year-old male Korean cadaver was embedded in gelatin solution and serially sectioned into 1.4 mm-thick slices using a meat slicer. The 130 sectioned specimens were digitized with a scanner ($420\times456$ resolution, true color), and the 2D images were aligned with an alignment program written in the IDL language. Outlines of the brain components (cerebrum, cerebellum, brain stem, lentiform nucleus, caudate nucleus, thalamus, optic nerve, fornix, cerebral artery, and ventricle) were manually drawn from the 2D images in CorelDRAW. Multimedia data, including text and voice comments, were added to help the user learn about the brain components. 3D images of the brain were reconstructed through volume-based rendering of the 2D images.
Using the 3D image of the brain as the main feature, a virtual dissection program was written in the IDL language. Various dissection functions were established, such as sectioning the 3D image of the brain at any angle to show its plane, presenting multimedia data on brain components, and rotating the 3D image of the whole brain or of selected components at any angle. This virtual dissection program is expected to become more advanced and to be used widely, via the Internet or on CD, as an educational tool for medical students and doctors.
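The core idea of stacking serial sections into a volume and re-slicing it along another plane can be sketched as follows. This is an illustrative toy, not the authors' IDL code; the 4-slice volume and its values are invented for the example, and only the 1.4 mm slice thickness comes from the abstract:

```python
# Toy sketch: stack serial section images into a voxel volume volume[z][y][x]
# and read it back along a different anatomical plane.
SLICE_THICKNESS_MM = 1.4  # from the abstract: 1.4 mm serial sections

# Invented 4-slice, 3x3 grayscale volume for illustration.
volume = [[[z * 100 + y * 10 + x for x in range(3)] for y in range(3)]
          for z in range(4)]

def axial_slice(vol, z):
    """Original sectioning plane: one scanned slice."""
    return vol[z]

def coronal_slice(vol, y):
    """Re-slice the stacked volume along a perpendicular plane."""
    return [[vol[z][y][x] for x in range(len(vol[0][0]))]
            for z in range(len(vol))]

def slice_depth_mm(z):
    """Physical depth of slice z within the specimen."""
    return z * SLICE_THICKNESS_MM

print(coronal_slice(volume, 1)[2])  # → [210, 211, 212] (row from z=2 at y=1)
print(slice_depth_mm(129))          # depth of the last of 130 slices
```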


실제와 합성영상의 비교에 의한 비디오 아바타의 정합 (Registration of Video Avatar by Comparing Real and Synthetic Images)

  • 박문호;고희동;변혜란
    • 한국정보과학회논문지:시스템및이론
    • /
    • Vol. 33, No. 8
    • /
    • pp.477-485
    • /
    • 2006
  • In this paper, a video avatar, which supplies real-time images of an actual participant to the virtual environment, is used to represent a participant in the virtual environment. Using a video avatar increases the fidelity with which the participant is represented, but accurate registration becomes an important issue. For registration of the video avatar, the characteristics of the camera used in the real environment and of the virtual camera used to generate the virtual environment were calibrated to be identical. Based on the similarity between the calibrated real and virtual cameras, the video avatar acquired in the real environment was registered with the virtual environment by comparing real and synthetic images. In the registration process, the degree of misregistration was expressed as an energy and minimized; experiments confirmed that the proposed method can be applied effectively to registration of video avatars in virtual environments.
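The "express misregistration as an energy and minimize it" idea can be sketched in miniature. The sketch below is an assumption-laden toy: 1-D integer offsets and binary silhouettes stand in for the paper's full camera-calibrated 2-D comparison, and the energy is a simple sum of squared differences:

```python
# Toy sketch: score each candidate placement of the avatar by an "energy"
# (mismatch between real and synthetic images) and keep the minimizer.

def energy(real, synthetic, offset):
    """Sum of squared differences with the synthetic silhouette shifted."""
    e = 0
    for i, r in enumerate(real):
        j = i - offset
        s = synthetic[j] if 0 <= j < len(synthetic) else 0
        e += (r - s) ** 2
    return e

def register(real, synthetic, max_offset=3):
    """Exhaustively search for the offset that minimizes the energy."""
    return min(range(-max_offset, max_offset + 1),
               key=lambda off: energy(real, synthetic, off))

real      = [0, 0, 1, 1, 1, 0, 0]  # observed avatar silhouette
synthetic = [1, 1, 1, 0, 0, 0, 0]  # rendered at the wrong position

print(register(real, synthetic))  # → 2 (offset that aligns the silhouettes)
```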

상하의 의류 영상을 이용한 가상 의류 착의 시스템 (A Virtual Fitting System Using The Top and Bottom Image of Garment)

  • 최란;조창석
    • 한국멀티미디어학회논문지
    • /
    • Vol. 15, No. 7
    • /
    • pp.941-950
    • /
    • 2012
  • This study introduces a virtual fitting system that layers top and bottom garments on 3D human body data on a PC. For this purpose, 3D body data obtained by laser scanning and digital garment data obtained by photographing the front and back of garments are used. Tension between mass points within the fabric is reflected in the 2D front/back garment data, and friction and gravity are applied during the process of dressing the body data. When fitting the bottom garment, a belt concept is introduced in addition to friction and gravity to keep the garment from slipping down, and a layered fitting method that puts the top on body data already wearing the bottom is presented. The advantage of this system is that, unlike other studies that dress garments using complex patterns, it uses only the front and back images of a garment while achieving comparable realism. Online clothing sales currently display only the front and back of garments; unlike the existing approach, which cannot show the dressed appearance, this system additionally provides a 3D dressed view and is expected to change the way clothing is sold.
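The forces the abstract names for the fabric model (tension between mass points, gravity, friction) can be illustrated with a minimal mass-spring sketch. All constants and the integration scheme below are illustrative assumptions, not the paper's implementation, and friction is simplified to velocity damping:

```python
# Minimal mass-spring sketch: one mass point hanging from a fixed anchor,
# under spring tension, gravity, and damping ("friction"). Illustrative only.

GRAVITY = -9.8
DT = 0.01
REST_LEN = 1.0
STIFFNESS = 50.0
FRICTION = 0.9  # fraction of velocity retained per step

def step(y, vy, y_anchor):
    """Advance the mass point by one explicit time step."""
    stretch = (y_anchor - y) - REST_LEN
    tension = STIFFNESS * stretch            # spring pulls toward rest length
    vy = (vy + (tension + GRAVITY) * DT) * FRICTION
    return y + vy * DT, vy

y, vy = 0.0, 0.0  # mass point 1 m below a fixed anchor at y = 1.0
for _ in range(2000):
    y, vy = step(y, vy, 1.0)

# At equilibrium, spring force balances gravity:
# STIFFNESS * stretch = -GRAVITY, so stretch settles near 9.8/50 = 0.196.
print(round(1.0 - REST_LEN - y, 3))
```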

딥러닝 의류 가상 합성 모델 연구: 가중치 공유 & 학습 최적화 기반 HR-VITON 기법 활용 (Virtual Fitting System Using Deep Learning Methodology: HR-VITON Based on Weight Sharing, Mixed Precision & Gradient Accumulation)

  • 이현상;오세환;하성호
    • 한국정보시스템학회지:정보시스템연구
    • /
    • Vol. 31, No. 4
    • /
    • pp.145-160
    • /
    • 2022
  • Purpose The purpose of this study is to develop a virtual try-on deep learning model that can efficiently learn front and back clothing images. The application of virtual try-on clothing services in the fashion and textile industry is expected to be vitalized as a result. Design/methodology/approach This study used 232,355 garment and product images. The image data input to the model fall into 5 categories: the original clothing image, the wearer image, the clothing segmentation, the wearer's body DensePose heatmap, and the wearer's clothing-agnostic representation. We advanced the HR-VITON model by means of mixed precision, gradient accumulation, and model weight sharing. Findings We demonstrated that the weight-shared MP-GA HR-VITON model can efficiently learn front and back fashion images. The proposed model quantitatively improves the quality of the generated images compared with the existing technique, and natural fitting is possible in both front and back images. SSIM was 0.8385 for CP-VTON and 0.9204 for the proposed model; LPIPS was 0.2133 and 0.0642, FID 74.5421 and 11.8463, and KID 0.064 and 0.006, respectively. With the deep learning model of this study, solid-color clothes can be fitted naturally, but when complex pictures and logos are present, as shown in <Figure 6>, unnatural patterns occur in the generated image. If the model is advanced based on a transformer, this problem may also be improved.
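Gradient accumulation, one of the training optimizations the abstract applies to HR-VITON, can be shown framework-free: sum gradients over several micro-batches and update the weights once, emulating a larger batch in the same memory. The toy 1-parameter least-squares model below is an assumption for illustration, not HR-VITON itself:

```python
# Gradient accumulation sketch on a toy model fitting y = w * x.

ACCUM_STEPS = 4  # micro-batches per optimizer step
LR = 0.1

def grad(w, x, y):
    """d/dw of 0.5 * (w*x - y)^2 for one sample."""
    return (w * x - y) * x

w = 0.0
data = [(1.0, 2.0), (2.0, 4.0), (1.0, 2.0), (2.0, 4.0)] * 40  # y = 2x
accum = 0.0
for step, (x, y) in enumerate(data, 1):
    accum += grad(w, x, y)             # accumulate instead of updating
    if step % ACCUM_STEPS == 0:
        w -= LR * accum / ACCUM_STEPS  # one update per 4 micro-batches
        accum = 0.0

print(round(w, 2))  # → 2.0 (converges to the true weight)
```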

스테레오 이미지에 관한 연구 (A Study on Stereo Image (Stereopsis))

  • 홍석일
    • 디자인학연구
    • /
    • Vol. 12, No. 3
    • /
    • pp.191-200
    • /
    • 1999
  • Based on a historical review of stereo images, this study analyzes the human visual perception system and the principles of traditional stereo images, and further investigates new electronic 3D stereo images in computer graphics visualization. As computers became able to process and render images, computer visualization attained highly precise and realistic quality. Scientific results that were previously presented as numbers now go beyond simple 2D figures: various techniques have been developed to display 3D data stereoscopically for more realistic simulation. Accordingly, this study analyzes optical and electronic stereo images based on the principles of the stereo image, and explores the expressive possibilities of new 3D stereo images in computer graphics.
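The binocular principle underlying stereo images is that a point's horizontal disparity between the left and right views encodes its depth, via the pinhole relation Z = fB/d. The numbers below are illustrative assumptions (the 65 mm baseline roughly matches human interocular distance):

```python
# Pinhole stereo relations between depth and disparity. Illustrative values.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Z = f * B / d: nearer points shift more between the two views."""
    return focal_px * baseline_m / disparity_px

def disparity_from_depth(focal_px, baseline_m, depth_m):
    """Inverse relation: d = f * B / Z."""
    return focal_px * baseline_m / depth_m

f, b = 800.0, 0.065  # focal length in pixels; ~65 mm baseline
near = depth_from_disparity(f, b, 26.0)
far  = depth_from_disparity(f, b, 13.0)
print(near, far)  # halving the disparity doubles the depth
```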


스테레오 혼합 현실 영상 합성을 위한 계층적 변이 추정 (Hierarchical Disparity Estimation for Image Synthesis in Stereo Mixed Reality)

  • 김한성;최승철;손광훈
    • 방송공학회논문지
    • /
    • Vol. 7, No. 3
    • /
    • pp.229-237
    • /
    • 2002
  • This paper proposes an algorithm that efficiently estimates fine disparity, taking the characteristics of stereo images into account, for compositing real and virtual images, which is a core technology of mixed reality, together with an algorithm that naturally composites images using the estimated depth information; both are verified through simulation. The proposed method is a hierarchical disparity estimation scheme that proceeds from low-resolution to high-resolution images; region-dividing bidirectional pixel matching improves both the speed and the reliability of disparity estimation, and fine disparities are assigned per pixel with reference to edge information. The estimated depth information is compared with the depth coordinates of the modeled virtual objects to composite mixed-reality stereo images. Experiments confirmed that the proposed method yields highly stable disparity information with accurate object boundaries and can be used efficiently for 3D image synthesis.
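The coarse-to-fine structure of hierarchical disparity estimation can be sketched in miniature: match at half resolution with a full search, then refine the doubled coarse disparity by ±1 at full resolution. This is an illustrative toy, not the paper's algorithm: 1-D signals and sum-of-absolute-differences matching stand in for real stereo images:

```python
# Toy coarse-to-fine disparity estimation on 1-D signals.

def best_shift(left, right, candidates):
    """Shift of `right` minimizing sum of absolute differences to `left`."""
    def sad(d):
        return sum(abs(l - (right[i - d] if 0 <= i - d < len(right) else 0))
                   for i, l in enumerate(left))
    return min(candidates, key=sad)

def downsample(sig):
    """Halve the resolution by averaging adjacent samples."""
    return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

def hierarchical_disparity(left, right, max_d=8):
    coarse = best_shift(downsample(left), downsample(right),
                        range(max_d // 2 + 1))      # low-res full search
    return best_shift(left, right,                  # high-res +-1 refinement
                      range(max(0, 2 * coarse - 1), 2 * coarse + 2))

right = [0, 0, 0, 5, 9, 5, 0, 0, 0, 0, 0, 0]
left  = [0] * 4 + right[:-4]   # scene shifted 4 px between the two views
print(hierarchical_disparity(left, right))  # → 4 (true shift recovered)
```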

Accuracy of virtual 3-dimensional cephalometric images constructed with 2-dimensional cephalograms using the biplanar radiography principle

  • Lee, Jae-Seo;Kim, Sang-Rok;Hwang, Hyeon-Shik;Lee, Kyungmin Clara
    • Imaging Science in Dentistry
    • /
    • Vol. 51, No. 4
    • /
    • pp.407-412
    • /
    • 2021
  • Purpose: The purpose of this study was to evaluate the accuracy of virtual 3-dimensional (3D) cephalograms constructed using the principle of biplanar radiography by comparing them with cone-beam computed tomography (CBCT) images. Materials and Methods: Thirty orthodontic patients were enrolled in this study. Frontal and lateral cephalograms were obtained with the use of a head posture aligner and reconstructed into 3D cephalograms using biplanar radiography software. Thirty-four measurements representing the height, width, depth, and oblique distance were computed in 3 dimensions, and compared with the measurements from the 3D images obtained by CBCT, using the paired t-test and Bland-Altman analysis. Results: Comparison of height, width, depth, and oblique measurements showed no statistically significant differences between the measurements obtained from 3D cephalograms and those from CBCT images (P>0.05). Bland-Altman plots also showed high agreement between the 3D cephalograms and CBCT images. Conclusion: Accurate 3D cephalograms can be constructed using the principle of biplanar radiography if frontal and lateral cephalograms can be obtained with a head posture aligner. Three-dimensional cephalograms generated using biplanar radiography can replace CBCT images taken for diagnostic purposes.
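The geometric intuition behind biplanar reconstruction can be sketched simply: with the head posture fixed, a frontal cephalogram projects a landmark to (x, z) and a lateral one to (y, z), so combining the two recovers a 3D position. The orthographic toy below ignores the magnification and geometry calibration a real biplanar system performs, and the landmark coordinates are invented for illustration:

```python
# Toy biplanar reconstruction: merge two orthogonal projections of the same
# landmark into a 3-D point, then take a 3-D (oblique) measurement.

def reconstruct(frontal_xz, lateral_yz):
    """Combine frontal (x, z) and lateral (y, z) projections into (x, y, z)."""
    (x, z1), (y, z2) = frontal_xz, lateral_yz
    return (x, y, (z1 + z2) / 2)  # z is seen in both views; average them

def distance(p, q):
    """Euclidean distance between two 3-D points."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

# Invented landmark projections (arbitrary units), slightly inconsistent in z
# as real film pairs would be.
nasion = reconstruct((0.0, 10.0), (8.0, 10.2))
menton = reconstruct((0.0, -1.0), (7.0, -0.8))
print(nasion)
print(round(distance(nasion, menton), 2))  # an oblique 3-D measurement
```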

Development of a Virtual Pitching System in Screen Baseball Game

  • Min, Meekyung;Kim, Kapsu
    • International journal of advanced smart convergence
    • /
    • Vol. 7, No. 3
    • /
    • pp.66-72
    • /
    • 2018
  • In recent years, indoor simulated sports have become widespread, and screen baseball systems have emerged that allow baseball to be played indoors. In this paper, we propose a virtual pitching system that can improve the realism of screen baseball games. This virtual pitching system is characterized by a transmissive screen with the pitching machine installed behind it, so that no pitching hole is needed. Therefore, unlike existing systems in which a pitching hole is cut into the screen, it enhances immersion in the displayed images. In addition, a synchronization algorithm between the pitching machine and the virtual pitcher creates a sense of unity between the virtual pitcher and the ball for various types of virtual pitchers, thereby enhancing the realism of the baseball game.
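One way to picture the pitcher-ball synchronization is as a scheduling problem: fire the machine early by its own latency so the real ball exits the screen exactly when the on-screen pitcher reaches its release frame. The sketch below is a guess at the general idea, not the paper's algorithm, and all timing values are invented:

```python
# Toy synchronization sketch: schedule the machine trigger so ball exit
# coincides with the virtual pitcher's hand-release frame.

FPS = 30.0  # assumed animation frame rate

def machine_trigger_time(release_frame, machine_latency_s,
                         animation_start_s=0.0):
    """Fire the machine early by its latency so both events coincide."""
    release_time = animation_start_s + release_frame / FPS
    return release_time - machine_latency_s

# A wind-up whose hand release is at frame 40; the machine is assumed to
# need 0.25 s from trigger to ball exit.
t = machine_trigger_time(release_frame=40, machine_latency_s=0.25)
print(round(t, 3))  # trigger time; ball exit then matches frame 40
```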

3D 어패럴 캐드 시스템으로 제작된 가상의복의 소재물성별 실물 재현도에 관한 연구 (A Study on Expressivity of Virtual Clothing made of 3D Apparel CAD System according to the Physical Properties of Fabric)

  • 오송윤;유은주
    • 한국의류산업학회지
    • /
    • Vol. 17, No. 4
    • /
    • pp.613-625
    • /
    • 2015
  • This research was conducted to provide basic data for improving the expressivity required for virtual clothing to replace actual clothing. For the experiment, 6 materials were selected and 12 actual flared skirts (2 lengths) were made. At the same time, 36 virtual flared skirts (12 skirts $\times$ 3 property sets: KES, FTU, and KES weight/10) were simulated in OptiTex Runway 12.0, with the measured property values (thickness, weight, bending, shear, friction, and stretch) applied. The study then compared and analyzed the wearing images, overlapped silhouette images, and skirt length measurements of the actual and virtual skirts put on a dummy. As a result, the actual skirts showed a clear distinction for each material. In contrast, virtual 1 and 2 expressed fabric 3 most similarly, but in general could not recreate the uniform, soft, and natural flare shape of the actual skirts. Virtual 3 formed natural flares like those of the actual skirts and expressed fabrics 1, 5, and 6 similarly; however, virtual 3 had too much volume and showed barely any distinction between materials. Virtual 1, 2, and 3 all expressed the different flare shapes on the front and back of the skirt similarly to the actual skirts and gave a good visual expression of the color and texture of the materials. However, they could not effectively express the elasticity and the sagging of fabric in the bias direction.