• Title/Summary/Keyword: 2D map


Continuous Perspective Query Processing for 3D Objects on Road Networks (도로네트워크 기반의 3차원 객체를 위한 연속원근질의처리)

  • Kim, Joon-Seok;Li, Ki-Joune;Jang, Byung-Tae;You, Jae-Joon
    • Spatial Information Research, v.15 no.2, pp.95-109, 2007
  • Recently, people have been offered location-based services on road networks. The navigation system, one such application, finds the nearest gas station or guides drivers along the shortest path based on a 2D map. However, a 3D map is a more important medium than a 2D map for conveying a realistic, intuitive sense of the environment. Although 3D map data are huge, the storage space of portable devices is small. In this paper, we define continuous perspective queries to deliver 3D maps to mobile users on road networks and propose a method for processing these queries.

  • PDF

A Study on 2D/3D Map Navigation System Based on Virtual Reality (VR 기반 2D/3D Map Navigation 시스템에 관한 연구)

  • Kwon Oh-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering, v.10 no.5, pp.928-933, 2006
  • This paper aims to build a 2D/3D map navigation system that users can operate efficiently, feeding in attribute information after acquiring 2D/3D spatial data. The system provides 2D/3D screen navigation with a variety of operational visual effects and displays the location the user is retrieving. It also presents high-resolution photographs of buildings together with URLs, store names, phone numbers, and other related information. The effectiveness of this system is as follows: first, development and distribution of a new 2D/3D spatial database technology that shifts the previous 2D-based system concept to a 2D/3D-based one; second, increased development productivity by reusing the integrated 2D/3D spatial database across various interfaces; finally, it secures a preemptive technological position with world-class domestic 2D/3D spatial database technology.

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of HCI Korea (한국HCI학회 학술대회논문집), 2006.02a, pp.355-361, 2006
  • In large-scale environments such as airports, museums, warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will survey such environments and report to a human operator with data such as whether an object is present or a window is open. Both for visualization of information and as a human-machine interface for remote control, a 3D model gives much more useful information than the typical 2D maps used in many robotic applications today: it is easier to understand, makes the user feel present at the robot's location so that remote interaction is more natural, and shows structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple, easy-to-use method for obtaining a 3D textured model. To convey reality, the 3D model must be integrated with real scenes. Most other 3D modeling methods rely on two data acquisition devices: one for building the 3D model (a 2D laser range-finder) and another for obtaining realistic textures (a common camera). Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range-finder, texture acquisition and stitching, and texture-mapping onto the corresponding 3D model. It is implemented with a laser sensor for obtaining the 2D/3D metric map and two cameras for gathering texture. Our geometric 3D model consists of planes that model the floor and walls, with the plane geometry extracted from the 2D metric map data. Textures for the floor and walls are generated from images captured by two IEEE 1394 cameras with wide fields of view; image stitching and cropping produce textured images matching the 3D model. The algorithm is applied to two cases: a corridor and a four-walled, room-like space. The generated 3D indoor map is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.
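The core geometric step described above, lifting wall segments of a 2D metric map into 3D planes, can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the segment format and the `wall_height` parameter are assumptions.

```python
import numpy as np

def extrude_walls(segments, wall_height=2.5):
    """Turn 2D wall segments ((x1, y1), (x2, y2)) extracted from a metric map
    into 3D rectangular wall planes, each given by its 4 corner vertices."""
    planes = []
    for (x1, y1), (x2, y2) in segments:
        planes.append(np.array([
            [x1, y1, 0.0], [x2, y2, 0.0],                  # bottom edge on the floor
            [x2, y2, wall_height], [x1, y1, wall_height],  # top edge at wall height
        ]))
    return planes

# Two perpendicular wall segments, e.g. a room corner
walls = extrude_walls([((0, 0), (4, 0)), ((4, 0), (4, 3))])
print(len(walls))    # 2 wall planes
print(walls[0][2])   # a top corner of the first wall: [4.  0.  2.5]
```

Textures stitched from the camera images would then be mapped onto each returned plane.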

  • PDF

Motion Planning for Legged Robots Using Locomotion Primitives in the 3D Workspace (3차원 작업공간에서 보행 프리미티브를 이용한 다리형 로봇의 운동 계획)

  • Kim, Yong-Tae;Kim, Han-Jung
    • The Journal of Korea Robotics Society, v.2 no.3, pp.275-281, 2007
  • This paper presents a motion planning strategy for legged robots using locomotion primitives in complex 3D environments. First, we define configurations, motion primitives, and locomotion primitives for legged robots. A hierarchical motion planning method based on a combination of 2.5-dimensional maps of the 3D workspace is proposed. A global navigation map is obtained from 2.5-dimensional maps such as an obstacle height map, a passage map, and a gradient map of obstacles, which together distinguish obstacle types. A high-level path planner finds a global path from the 2D navigation map. A mid-level planner creates sub-goals that help the legged robot cope efficiently with various obstacles using only a small set of locomotion primitives that are useful for stable navigation. A local obstacle map that describes the edges or borders of obstacles is used to find the sub-goals along the global path. A low-level planner searches for a feasible sequence of locomotion primitives between sub-goals, using a heuristic algorithm. The proposed planning method is verified by both locomotion and soccer experiments on a small biped robot in a cluttered environment. Experimental results show improved motion stability.
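The fusion of 2.5D maps into a global navigation map might look like the sketch below. This is a minimal illustration of the idea, not the paper's method; the map encodings and the `max_step`/`max_grad` thresholds are assumptions.

```python
import numpy as np

def navigation_map(height_map, passage_map, gradient_map,
                   max_step=0.04, max_grad=0.3):
    """Combine 2.5D layers into a boolean navigation map: a cell is
    traversable if the obstacle is low enough to step over OR the cell
    is marked as a passage, AND the local gradient is not too steep."""
    return ((height_map <= max_step) | passage_map) & (gradient_map <= max_grad)

h = np.array([[0.00, 0.10],   # obstacle heights (m)
              [0.02, 0.03]])
p = np.array([[False, True],  # passage map (e.g. a doorway over a sill)
              [False, False]])
g = np.array([[0.1, 0.1],     # gradient map (slope)
              [0.5, 0.1]])
print(navigation_map(h, p, g))  # [[ True  True] [False  True]]
```

The high-level planner would then search this boolean grid for a global path.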

  • PDF

Effects of Depth Map Quantization for Computer-Generated Multiview Images using Depth Image-Based Rendering

  • Kim, Min-Young;Cho, Yong-Joo;Choo, Hyon-Gon;Kim, Jin-Woong;Park, Kyoung-Shin
    • KSII Transactions on Internet and Information Systems (TIIS), v.5 no.11, pp.2175-2190, 2011
  • This paper presents the effects of depth map quantization on multiview intermediate image generation using depth image-based rendering (DIBR). DIBR synthesizes multiple virtual views of a 3D scene from a 2D image and its associated depth map. However, it needs precise depth information to generate reliable and accurate intermediate views for multiview 3D display systems. Previous work has extensively studied pre-processing of the depth map, but little is known about depth map quantization. In this paper, we conduct an experiment to estimate the degree of depth map quantization that still affords acceptable image quality in DIBR-based multiview intermediate images. The experiment uses computer-generated 3D scenes, in which multiview images captured directly from the scene are compared with multiview intermediate images constructed by DIBR from depth maps quantized to various levels. The results showed that quantizing the depth map from 16 bits down to 7 bits (more specifically, 96 levels) had no significant effect on DIBR. Hence, a depth map of at least 7 bits is needed to maintain sufficient image quality in a DIBR-based multiview 3D system.
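The quantization being evaluated can be sketched as mid-rise requantization of a 16-bit depth map to a given number of levels. This is a generic illustration, not the paper's exact procedure; the mid-point reconstruction is an assumption.

```python
import numpy as np

def quantize_depth(depth16, levels):
    """Quantize a 16-bit depth map to `levels` distinct values, keeping
    the 16-bit range so downstream DIBR warping code is unchanged."""
    step = 65536 / levels
    idx = np.minimum((depth16 / step).astype(np.int32), levels - 1)
    return ((idx + 0.5) * step).astype(np.uint16)   # bin mid-points

# The 96-level (~7-bit) case the paper identifies as the acceptable limit
full_range = np.arange(65536, dtype=np.uint16)
q = quantize_depth(full_range, 96)
print(len(np.unique(q)))   # 96 distinct depth values survive
```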

A New Copyright Protection Scheme for Depth Map in 3D Video

  • Li, Zhaotian;Zhu, Yuesheng;Luo, Guibo;Guo, Biao
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.7, pp.3558-3577, 2017
  • In the 2D-to-3D video conversion process, virtual left and right views can be generated from a 2D video and its corresponding depth map by depth image-based rendering (DIBR). The depth map plays an important role in the conversion system, so copyright protection for the depth map is necessary. However, the generated virtual views may be distributed illegally, while the depth map itself is never directly exposed to viewers. In previous works, copyright information embedded into the depth map could not be extracted from the virtual views after the DIBR process. In this paper, a new copyright protection scheme for the depth map is proposed, in which the copyright information can be detected from the virtual views even without the depth map. The experimental results show that the proposed method is robust against JPEG attacks, filtering, and noise.

Spatio-temporal Data Model for 2D Map and It's Implementation Method (2차원 지도용 시계열 공간 데이터 모델과 구축방법)

  • Hwang, Jin Sang;Kim, Jae Koo;Yun, Hong Sik
    • Journal of Korean Society for Geospatial Information Science, v.23 no.2, pp.105-111, 2015
  • Domestic 2D maps include only the most up-to-date information at the time of production, without historical information, so it is hard to trace the change history of real-world objects. In this research, a spatio-temporal model for 2D maps was developed, and its applicability was verified through a pilot project conducted in the Gwanggyo area of Gyeonggi Province. The procedure for building a 2D spatio-temporal database from maps produced periodically over the same target area is also introduced, demonstrating the feasibility of a nationwide spatio-temporal 2D map based on the periodically updated national base map.
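A common way to realize such a model is to attach a valid-time interval to each map feature, so any historical snapshot can be reconstructed. The sketch below illustrates that idea under assumed field names; it is not the paper's schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class MapFeature:
    """A 2D map object with a valid-time interval; valid_to=None means current."""
    feature_id: str
    geometry: list                 # e.g. polygon vertex list
    valid_from: date
    valid_to: Optional[date] = None

def snapshot(features, at):
    """Reconstruct the map as it existed on date `at`."""
    return [f for f in features
            if f.valid_from <= at and (f.valid_to is None or at < f.valid_to)]

# One building, demolished and rebuilt in mid-2013
history = [
    MapFeature("bldg-1", [(0, 0), (1, 0), (1, 1)], date(2010, 1, 1), date(2013, 6, 1)),
    MapFeature("bldg-1", [(0, 0), (2, 0), (2, 2)], date(2013, 6, 1)),
]
print([f.valid_from.year for f in snapshot(history, date(2012, 1, 1))])  # [2010]
print([f.valid_from.year for f in snapshot(history, date(2020, 1, 1))])  # [2013]
```

Loading each periodic map edition as a new set of intervals (closing the old ones) yields the time-series database the abstract describes.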

Development of 3D Point Cloud Mapping System Using 2D LiDAR and Commercial Visual-inertial Odometry Sensor (2차원 라이다와 상업용 영상-관성 기반 주행 거리 기록계를 이용한 3차원 점 구름 지도 작성 시스템 개발)

  • Moon, Jongsik;Lee, Byung-Yoon
    • IEMEK Journal of Embedded Systems and Applications, v.16 no.3, pp.107-111, 2021
  • A 3D point cloud map is an essential element in various fields, including precise autonomous navigation systems. However, generating a 3D point cloud map with a single sensor is limited by the high price of such sensors. To solve this problem, we propose a precise 3D mapping system using low-cost sensor fusion. Generating a point cloud map requires estimating the current position and attitude and describing the surrounding environment. In this paper, we use a commercial visual-inertial odometry sensor to estimate the current position and attitude. Based on these state values, the 2D LiDAR measurements describe the surrounding environment to create the point cloud map. To analyze the performance of the proposed algorithm, we compared it with a 3D LiDAR-based SLAM (simultaneous localization and mapping) algorithm. As a result, we confirmed that a precise 3D point cloud map can be generated with the proposed low-cost sensor fusion system.
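The fusion step amounts to projecting each planar LiDAR scan into the world frame using the VIO pose estimate. A minimal sketch, assuming a level-mounted scanner and a (x, y, z, yaw) pose (the paper's actual state representation may differ):

```python
import numpy as np

def scan_to_world(ranges, angles, pose):
    """Project a planar LiDAR scan into the world frame using the
    position/attitude estimated by visual-inertial odometry."""
    x, y, z, yaw = pose
    px = ranges * np.cos(angles)   # scan points in the sensor frame
    py = ranges * np.sin(angles)
    c, s = np.cos(yaw), np.sin(yaw)
    wx = x + c * px - s * py       # rotate by yaw, then translate
    wy = y + s * px + c * py
    wz = np.full_like(wx, z)       # height comes from the VIO state
    return np.stack([wx, wy, wz], axis=1)

pts = scan_to_world(np.array([1.0, 2.0]),
                    np.array([0.0, np.pi / 2]),
                    pose=(1.0, 0.0, 0.5, 0.0))
print(pts)   # ≈ [[2, 0, 0.5], [1, 2, 0.5]]
```

Accumulating the returned points over successive poses yields the 3D point cloud map.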

A Study on Terrain Construction of Unmanned Aerial Vehicle Simulator Based on Spatial Information (공간정보 기반의 무인비행체 시뮬레이터 지형 구축에 관한 연구)

  • Park, Sang Hyun;Hong, Gi Ho;Won, Jin Hee;Heo, Yong Seok
    • Journal of Korea Multimedia Society, v.22 no.9, pp.1122-1131, 2019
  • This paper covers research on terrain construction for unmanned aerial vehicle simulators using spatial information distributed by public institutions. Aerial photography, a DEM, vector maps, and 3D model data were used to create a realistic terrain for the simulator. A data conversion method is proposed that automatically arranges and builds the city models provided by vWorld, together with classification methods so that realistic imagery can be generated from 3D objects. Rivers, forests, roads, fields, and so on were arranged according to the aerial photographs and a vector map (land cover map), on terrain constructed from a DEM-based tile map. To verify the terrain data of the unmanned aerial vehicle simulator produced by the proposed method, it was mounted onto Unreal Engine and its location accuracy was checked.

3-DTIP: 3-D Stereoscopic Tour-Into-Picture Based on Depth Map (3-DTIP: 깊이 데이터 기반 3차원 입체 TIP)

  • Jo, Cheol-Yong;Kim, Je-Dong;Jeong, Da-Un;Gil, Jong-In;Lee, Kwang-Hoon;Kim, Man-Bae
    • Proceedings of the IEEK Conference, 2009.05a, pp.28-30, 2009
  • This paper describes 3-DTIP (3-D Tour Into Picture) using a depth map for a Korean classical painting composed of persons and landscape. Unlike conventional TIP methods that provide 2-D images or video, the proposed TIP provides users with 3-D stereoscopic content; navigating inside a picture gives a more realistic and immersive perception. The method first builds a depth map. The input data consist of a foreground object, a background image, a depth map, and a foreground mask. We separate the foreground object from the background and build a depth map for each. The background is decomposed into polygons, a depth value is assigned to each vertex, and each polygon is then decomposed into triangles; Gouraud shading is used to produce the final depth map. Navigation into the picture uses the OpenGL library. The proposed method was tested on "Danopungjun" and "Muyigido," famous paintings from the Chosun Dynasty. The stereoscopic video proved to deliver a new 3-D perception that 2-D video cannot.
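The Gouraud-style step above, filling each triangle with depths interpolated from its three vertices, reduces to barycentric blending. A minimal sketch of that computation (an illustration of the standard technique, not the paper's code):

```python
def interpolate_depth(p, tri, depths):
    """Gouraud-style interpolation: depth at point p inside triangle `tri`
    as the barycentric blend of the three vertex depths."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    w2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    w3 = 1.0 - w1 - w2
    return w1 * depths[0] + w2 * depths[1] + w3 * depths[2]

tri = [(0, 0), (4, 0), (0, 4)]
print(interpolate_depth((1, 1), tri, (0.2, 0.6, 1.0)))  # 0.5
```

In an OpenGL pipeline the same blend is what `GL_SMOOTH` shading performs per fragment.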

  • PDF