• Title/Summary/Keyword: Virtual methods

1,426 search results

Realtime Video Visualization based on 3D GIS (3차원 GIS 기반 실시간 비디오 시각화 기술)

  • Yoon, Chang-Rak;Kim, Hak-Cheol;Kim, Kyung-Ok;Hwang, Chi-Jung
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.1
    • /
    • pp.63-70
    • /
    • 2009
  • 3D GIS (Geographic Information System) processes, analyzes, and presents various real-world 3D phenomena by building 3D spatial information for real-world terrain, facilities, etc., and combining it with visualization techniques such as VR (Virtual Reality). It can be applied in areas such as urban management, traffic information, environment management, disaster management, and ocean management systems. In this paper, we propose a video visualization technology based on 3D geographic information to effectively provide real-time information in a 3D geographic information system, and we also present methods for establishing 3D building information data. The proposed video visualization system can provide real-time video information based on 3D geographic information by projecting real-time video streams from network video cameras onto 3D geographic objects and texture-mapping the video frames onto terrain, facilities, etc. We also developed a semi-automatic DBM (Digital Building Model) building technique using both aerial imagery and LiDAR data for 3D projective texture mapping. Current 3D geographic information systems provide only static visualization, and the proposed method can replace this static information with real video information. The proposed method can be used in location-based decision-making systems by providing real-time visualization information and, moreover, to provide intelligent context-aware services based on geographic information.

  • PDF
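The core of the projective texture mapping mentioned above is projecting a 3D world point into the video frame to find which texel lands on it. A minimal sketch follows; the camera pose, focal length, and frame size are illustrative values rather than the paper's setup, and the camera is assumed to look straight down its +Z axis (a real system would apply the full extrinsic rotation first).

```python
# Sketch: map a 3D world point on a building facade to normalized (u, v)
# texture coordinates in a video frame, using a pinhole camera model.

def project_to_uv(point, cam_pos, focal, width, height):
    """Project a world-space point into texture coordinates in [0, 1]^2.

    Assumes the camera looks down +Z from cam_pos with no rotation.
    Returns None for points behind the camera (no video texel exists).
    """
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    if z <= 0:
        return None  # behind the camera
    # Pinhole projection to pixel coordinates (principal point at center)
    px = focal * x / z + width / 2
    py = focal * y / z + height / 2
    # Normalize to texture space; values outside [0, 1] fall off the frame
    return (px / width, py / height)

uv = project_to_uv((2.0, 1.0, 10.0), (0.0, 0.0, 0.0), 800.0, 1920, 1080)
```

In a full pipeline this computation is done per fragment on the GPU (e.g. via a texture matrix), with a depth comparison so that occluded facades do not receive the video texture.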

Study on the Emotional Response of VR Contents Based on Photorealism: Focusing on 360 Product Image (실사 기반 VR 콘텐츠의 감성 반응 연구: 360 제품 이미지를 중심으로)

  • Sim, Hyun-Jun;Noh, Yeon-Sook
    • Science of Emotion and Sensibility
    • /
    • v.23 no.2
    • /
    • pp.75-88
    • /
    • 2020
  • Given the development of information technology, various methods for efficient information delivery have been devised as product information has moved from offline and 2D to online and 3D. These attempts not only deliver product information in an online space where no physical product exists but also play a crucial role in diversifying and revitalizing online shopping by providing virtual experiences to consumers. A 360 product image is a photorealistic form of VR in which a subject is rotated and photographed so that it can be viewed in three dimensions. It has attracted considerable attention because it can deliver richer information about an object than conventional still photography. A 360 product image is influenced by various production factors, and user responses differ accordingly. However, as the technology is relatively new, related research is still scarce. Therefore, this study aimed to grasp how user responses vary with the type of product and the number of source images used in producing a 360 product image. To this end, a representative product was selected from among the product groups commonly found in online shopping malls, 360 product images were produced, and an experiment was conducted with 75 users. The emotional responses to the 360 product images were analyzed through an experimental questionnaire based on the semantic differential method. The results of this study could be used as basic data for understanding consumer sensitivity to 360 product images.
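The "number of source images" factor in the study above determines the rotation step between adjacent frames of a 360 product image. As a quick illustration of what that parameter controls (the counts below are common production choices, not the paper's experimental conditions):

```python
# A 360 product image is built from n photographs taken at equal rotation
# steps around the subject; fewer source images mean a coarser step and a
# jerkier rotation when the user drags the object.
steps = {n: 360 / n for n in (12, 24, 36)}  # degrees between adjacent frames
```

For example, 12 source images give a 30-degree step, while 36 images give a 10-degree step at three times the shooting cost.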

Design of Vision-based Interaction Tool for 3D Interaction in Desktop Environment (데스크탑 환경에서의 3차원 상호작용을 위한 비전기반 인터랙션 도구의 설계)

  • Choi, Yoo-Joo;Rhee, Seon-Min;You, Hyo-Sun;Roh, Young-Sub
    • The KIPS Transactions:PartB
    • /
    • v.15B no.5
    • /
    • pp.421-434
    • /
    • 2008
  • As computer graphics, virtual reality, and augmented reality technologies have developed, many application areas based on them require interaction in 3D space, such as the selection and manipulation of 3D objects. In this paper, we propose a framework for vision-based 3D interaction that can simulate the functions of an expensive 3D mouse in a desktop environment. The proposed framework includes a specially manufactured interaction device using three-color LEDs. By recognizing the position and color of the LEDs in video sequences, various mouse events and 6-DOF interactions are supported. Since the proposed device is more intuitive and easier to use than an existing 3D mouse, which is expensive and requires skilled manipulation, it can be used without additional learning or training. We explain the methods for making the three-color LED pointing device, one of the components of the proposed framework, for calculating the 3D position and orientation of the pointer, and for analyzing the LED color from video sequences. We verify the accuracy and usefulness of the proposed device by reporting the measured error of the 3D position and orientation.
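One step such a vision-based tracker needs is deciding which of the three LED colors a detected bright pixel belongs to. A minimal sketch of that classification, assuming a nearest-reference-color rule in RGB space (the reference values and the rule itself are illustrative, not the paper's actual recognition method):

```python
# Assign a pixel to the nearest of three reference LED colors in RGB space.
REF = {"red": (255, 40, 40), "green": (40, 255, 40), "blue": (40, 40, 255)}

def classify_led(pixel):
    """Return the name of the reference color closest to `pixel`
    by squared Euclidean distance in RGB space."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REF, key=lambda name: dist2(pixel, REF[name]))

led = classify_led((230, 60, 50))  # a bright reddish blob
```

A real implementation would typically threshold for brightness first and work in a hue-based color space to be robust against camera exposure changes.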

Accuracy Analysis of Cadastral Control Point Surveying Using VRS: A Case Study of Jinju City (가상기지국을 활용한 지적기준점 관측 정확도 분석 -진주시 일원을 중심으로-)

  • Choi, Hyun;Kim, Kyu Cheol
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.30 no.4
    • /
    • pp.413-422
    • /
    • 2012
  • After GPS was developed in the 1960s, the United States discontinued SA (Selective Availability) in 2000, and GPS has since become widely commercialized. Among GPS real-time observation methods, RTK fundamentally requires base stations and suffers from decreasing accuracy as the distance between the rover and the reference receiver increases. To overcome this weakness, the VRS method was introduced. VRS (Virtual Reference Station) generates a virtual reference point near the rover from the data of several GPS reference stations and determines the accurate location of the rover, thus offering high reliability and mobility. Currently, cadastral control points are surveyed with the azimuth, repetition, and graphical traversing methods for traverse networks. Such measurements exhibit many problems owing to unevenly distributed given points, restrictions on traverse leg lengths, and numerous observation errors. Therefore, this study comparatively analyzed cadastral control points surveyed by the VRS method using Continuously Operating Reference Stations. The comparison between the results surveyed by the repetition method with a total station and those surveyed by VRS-RTK showed an average x-axis error of -0.08 m, an average y-axis error of +0.07 m, and an average distance error of +0.11 m.
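As a quick consistency check on the figures above, the reported distance error follows (to rounding) from the two axis errors if it is taken as their root-sum-square; this is an assumption about how the paper combined the components, not a stated formula.

```python
import math

# Combine the reported mean x/y errors into a horizontal distance error.
dx, dy = 0.08, 0.07  # mean axis errors in metres (signs dropped for magnitude)
dist = math.sqrt(dx ** 2 + dy ** 2)  # root-sum-square of the components
# dist is about 0.106 m, which rounds to the reported 0.11 m
```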

Analysis of Display Fatigue induced by HMD-based Virtual Reality Bicycle (HMD 기반 가상현실 자전거의 영상피로 분석)

  • Kim, Sun-Uk;Han, Seung Jo;Koo, Kyo-Chan
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.5
    • /
    • pp.692-699
    • /
    • 2017
  • The purpose of this study is to quantitatively investigate the display fatigue induced by operating 2D and HMD-based 3D VR bicycles. Although it is generally accepted that the display fatigue induced by 3D VR is greater than that induced by 2D VR, few studies have attempted to measure this fatigue scientifically. The subjective degree of cybersickness and the quantitative flicker fusion frequency (FFF) were measured in twenty subjects (10 male, 10 female) before and after they operated 2D and 3D VR bicycles for 5 minutes. The two dependent variables affected by the 2D and 3D VR displays were analyzed and compared statistically. This study showed that 3D VR resulted in significantly higher cybersickness and significantly lower FFF than 2D VR. Given the current trend of coupling VR techniques with exercise equipment, it seems appropriate to verify such general beliefs through scientific methods and experimental measures such as FFF and cybersickness questionnaires.

Determination of Stereotactic Target Position with MR Localizer (자기공명영상을 이용한 두개부내 표적의 3차원적 위치결정)

  • 최태진;김옥배;주양구;서수지;손은익
    • Progress in Medical Physics
    • /
    • v.7 no.2
    • /
    • pp.67-77
    • /
    • 1996
  • Purpose: To obtain the 3-D coordinates of an intracranial target position from axial, sagittal, and coronal magnetic resonance images using a preliminary experimental target localizer. Materials and methods: In the preliminary experiments, the localizer was made of engineering plastic to avoid disturbing the magnetic field during MR image scanning. The MR localizer displays 9 points in three different axial tomograms. A bright localizer signal was obtained from a 0.1~0.3% paramagnetic gadolinium/DTPA solution in T1WI or T2WI. In this study, the 3-D positions of virtual targets were examined from three different axial MR images, and the stereotactic positions were compared with those of the BRW stereotactic system in CT scans of the same targets. Results: This study showed that the actual target position could be obtained from a single scan with the MR localizer, which has an inverse-N arrangement of 9 bars. A shimming test was performed to detect image distortion, but no distortion was found in the axial scans. The maximum target-position errors were 1.0 mm for axial, 1.3 mm for sagittal, and 1.7 mm for coronal images, respectively. Target localization with the MR localizer was investigated with spherical virtual targets in a cadaver skull. Furthermore, the target positions confirmed with the CRW stereotactic system showed a discrepancy of 1.3 mm. Summary: The intracranial target position was determined within 1.7 mm of discrepancy with the designed MR localizer. We found that target positions from axial images show smaller discrepancies than those from sagittal and coronal images.

  • PDF
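The inverse-N fiducial described above encodes the slice height geometrically: in each axial image, the diagonal bar's cross-section divides the segment between the two vertical bars in proportion to the slice's height on the frame. A minimal sketch of that similar-triangles relation, with illustrative bar spacing and frame height rather than the localizer's actual dimensions:

```python
# Recover the height of an axial slice above the localizer base from the
# in-image position of the diagonal bar between the two vertical bars.

def slice_height(d_diag, d_total, frame_height):
    """Height of the axial slice above the localizer base.

    d_diag:       in-image distance from one vertical bar to the diagonal spot
    d_total:      in-image distance between the two vertical bars
    frame_height: physical height of the N-fiducial (same units as result)
    """
    return (d_diag / d_total) * frame_height

h = slice_height(d_diag=30.0, d_total=120.0, frame_height=160.0)
```

Doing this for three N-fiducials around the head yields three points of known height, which fixes the full 3-D pose of the image plane in frame coordinates.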

Study on the Characteristics of Media Environment of MRS (혼합현실공간(MRS)의 미디어환경 특성연구)

  • Han, Jung-Yeob;Ahn, Jin-Keun
    • The Journal of the Korea Contents Association
    • /
    • v.10 no.11
    • /
    • pp.169-179
    • /
    • 2010
  • Today, space design is evolving into mixed reality space, where on-line and off-line are fused. However, no standard or measure for mixed reality space has been suggested, and there has been little research on its media environments and expression methods as a mixed system. This study therefore suggests four media environments and their characteristics, which serve both as critical points in mixed reality space based on ubiquitous technology and as a standard for spatial discernment. 1) Real space, a media environment perceived only through the human visual and tactile senses, is evolving through expression methods such as new materials based on digital technology and LED. 2) Augmented reality space is a media environment that uses information instruments and is expressed with diverse 2D and 3D contents. 3) Cyber space is an environment that depends entirely on media instruments and is produced purely by graphic information, without spatial or physical limitations. 4) Augmented cyber space is realized only through displays in a cyber studio and is a space where real objects and graphic information are mixed. Depending on the purpose of the experience, the media environments and expressive characteristics of mixed reality space can be fused, blended, and mixed, and this can be realized as an intelligent information space that one can experience without spatial, visual, or informational limitations. In the future, studies on the physical characteristics of contents according to these media environment characteristics are necessary.

Population Dynamics of Marbled sole Limanda yokohamae (GÜNTHER) in Tokyo Bay, Japan (동경만산 문치가자미 Limanda yokohamae (GÜNTHER)의 자원량 변동의 해석)

  • PARK Jong-Soo;SIMIZU Mako-to
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.24 no.1
    • /
    • pp.1-8
    • /
    • 1991
  • The population dynamics of marbled sole Limanda yokohamae (GÜNTHER) in Tokyo Bay, Japan, were studied by virtual population analysis (VPA) for multiple cohorts and by experimental fishing. Based on the biological data, the parameters of the Limanda yokohamae stock in Tokyo Bay were estimated as follows: the natural mortality coefficient (M) was 0.313 for males and 0.250 for females, the terminal fishing mortality coefficient (F) was 2.190 for males and 0.798 for females, and the rate of exploitation (E) was 30% to 50%. The multi-cohort virtual population analysis estimated the population size at 3,500,000 to 9,200,000 fish, while the experimental fishing estimated the stock size at 2,400,000 to 8,700,000 fish. The stock-size difference between the two methods was about two-fold in 1987, but in other years it ranged from 0.8 to 1.5 times, and both methods showed the same tendency of increase and decrease in CPUE and catches. From the isopleth diagram plotted by Beverton and Holt's yield-per-recruit analysis, catches could be increased two-fold for females and 1.3-fold for males over present levels through fishery management. Furthermore, reducing fishing effort, extending mesh size, and raising the length at first capture are reasonable measures for managing the stock at the optimum level.

  • PDF
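The back-calculation at the heart of VPA can be sketched with Pope's cohort-analysis approximation, N_t = N_{t+1} e^M + C_t e^{M/2}: abundance at each age is reconstructed backwards from the oldest age using catch-at-age data. The catches and terminal abundance below are made-up numbers; only M = 0.25 echoes the female natural mortality reported above.

```python
import math

def vpa_backcalc(catches, n_last, m):
    """Back-calculate cohort abundance at age via Pope's approximation.

    catches: catch-at-age for ages 0..T-1 (youngest first)
    n_last:  abundance at the oldest observed age T
    m:       instantaneous natural mortality coefficient
    Returns abundance at ages 0..T, youngest first.
    """
    ns = [n_last]
    for c in reversed(catches):
        # survivors raised by natural mortality, plus catch taken mid-year
        ns.append(ns[-1] * math.exp(m) + c * math.exp(m / 2))
    return list(reversed(ns))

stock = vpa_backcalc([500_000, 350_000, 200_000], n_last=100_000, m=0.25)
```

Summing the reconstructed abundances over the cohorts present in a given year is what yields stock-size estimates of the kind quoted in the abstract.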

Stereoscopic Free-viewpoint Tour-Into-Picture Generation from a Single Image (단안 영상의 입체 자유시점 Tour-Into-Picture)

  • Kim, Je-Dong;Lee, Kwang-Hoon;Kim, Man-Bae
    • Journal of Broadcast Engineering
    • /
    • v.15 no.2
    • /
    • pp.163-172
    • /
    • 2010
  • Free-viewpoint video delivers active content in which users can see images rendered from viewpoints they choose. Its applications span broad areas, especially museum tours, entertainment, and so forth. As a new free-viewpoint application, this paper presents a stereoscopic free-viewpoint TIP (Tour Into Picture) in which users can navigate the inside of a single image by controlling a virtual camera and utilizing depth data. Unlike conventional TIP methods, which provide 2D images or video, the proposed method provides users with stereoscopic, free-viewpoint 3D contents. Navigating a picture with stereoscopic viewing delivers a more realistic and immersive experience. The method uses semi-automatic processing to make a foreground mask, a background image, and a depth map. The second step is to navigate the single picture and obtain rendered images by perspective projection. For free-viewpoint viewing, a virtual camera supporting translation, rotation, look-around, and zooming is operated. In experiments, the proposed method was tested with 'Danopungjun', one of the famous paintings made in the Chosun Dynasty. The free-viewpoint software was developed with MFC Visual C++ and OpenGL libraries.
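The stereoscopic step above relies on the depth map: the horizontal offset between a point's position in the left- and right-eye renderings follows the standard relation disparity = baseline x focal / depth. A minimal sketch, with an illustrative eye separation and focal length rather than values from the paper:

```python
# Per-point horizontal disparity between the two eye views of a virtual
# stereo camera; nearer points shift more, which is what creates depth.

def disparity_px(depth, baseline=0.065, focal_px=1000.0):
    """Pixel offset between left- and right-eye projections of a point.

    depth:    distance from the camera in metres
    baseline: separation between the two virtual eyes in metres
    focal_px: focal length expressed in pixels
    """
    return baseline * focal_px / depth

near = disparity_px(2.0)   # a close foreground object
far = disparity_px(20.0)   # distant background
```

In practice each eye is rendered with its own slightly offset virtual camera, and this relation governs how strongly foreground regions of the painting pop out relative to the background.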

Accuracy of the CT guided implant template by using an intraoral scanner according to the edentulous distance (구강스캐너를 이용하여 제작된 CT 가이드 임플란트 수술용 형판의 무치악 거리에 따른 정확도 분석)

  • Kang, Byeong-Gil;Kim, Hee-Jung;Chung, Chae-Heon
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.55 no.1
    • /
    • pp.1-8
    • /
    • 2017
  • Purpose: The purpose of this study was to compare the accuracy of a CT-guided implant template produced using an intraoral scanner according to the edentulous distance. Materials and methods: Five maxillary casts were fabricated using radiopaque acrylic resin with the second premolars, first molars, and second molars missing. A virtual cast was then acquired by scanning each resin cast. Implant treatment was planned at the missing sites by superimposing the presurgical CT DICOM file and the virtual cast. The implants were then placed using a surgical template, followed by a postsurgical CT scan. The distances and angles of the platform and apex between the planned and placed implants were measured along the X, Y, and Z axes of the superimposed presurgical and postsurgical CTs via software, followed by statistical analysis using the Kruskal-Wallis and Mann-Whitney tests. Results: The implant placement angle error increased towards the second molars, but there was no statistically significant difference. The implant placement distance errors at the platform and apex also increased towards the second molars, with a statistically significant error at the second molars. Conclusion: Although the placement angle showed no statistically significant difference between the planned and placed implants, the placement distances at the platform and apex showed larger errors, with a statistically significant difference at the second-molar implants.