• Title/Summary/Keyword: 3D mapping


Coordinate Determination for Texture Mapping using Camera Calibration Method (카메라 보정을 이용한 텍스쳐 좌표 결정에 관한 연구)

  • Jeong K. W.;Lee Y.Y.;Ha S.;Park S.H.;Kim J. J.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.9 no.4
    • /
    • pp.397-405
    • /
    • 2004
  • Texture mapping is the process of covering 3D models with texture images in order to increase the visual realism of the models. For proper mapping, the coordinates of the texture images need to coincide with those of the 3D models. When projective images from a camera are used as texture images, the texture image coordinates are defined by a camera calibration method: they are determined by the relation between the coordinate systems of the camera image and the 3D object. With projective camera images, the distortion caused by the camera lens must be compensated in order to obtain accurate texture coordinates. The distortion problem has previously been handled with iterative methods, in which the camera calibration coefficients are first computed without considering distortion and then modified accordingly. These methods not only shift the position of the principal point in the image plane but also require more control points. In this paper, a new iterative method is suggested that reduces the error by fixing the principal point in the image plane. The method treats the image distortion independently and fixes the values of the correction coefficients, so that the distortion coefficients can be computed with fewer control points. It is shown that the camera distortion is compensated with fewer control points than in previous methods and that the projective texture mapping produces more realistic images.
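A minimal sketch of the kind of computation the abstract above describes: pinhole projection with a fixed principal point, plus iterative inversion of radial lens distortion. The one-coefficient distortion model (`k1`) and all function names are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def project_point(X, K, R, t):
    """Pinhole projection of a 3D point X into pixel coordinates.
    The principal point (K[0, 2], K[1, 2]) is held fixed."""
    Xc = R @ X + t                       # transform into the camera frame
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]  # normalized image coordinates
    u = K[0, 0] * x + K[0, 2]
    v = K[1, 1] * y + K[1, 2]
    return np.array([u, v])

def undistort_iterative(xd, k1, iters=10):
    """Iteratively invert the radial distortion model
    x_d = x_u * (1 + k1 * r^2) by fixed-point iteration,
    keeping the principal point (the origin here) fixed."""
    xu = xd.copy()
    for _ in range(iters):
        r2 = xu[0] ** 2 + xu[1] ** 2
        xu = xd / (1.0 + k1 * r2)
    return xu
```

The fixed-point loop converges quickly for the small distortion coefficients typical of real lenses, so a handful of iterations suffices.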

Enhanced Image Mapping Method for Computer-Generated Integral Imaging System (집적 영상 시스템을 위한 향상된 이미지 매핑 방법)

  • Lee Bin-Na-Ra;Cho Yong-Joo;Park Kyoung-Shin;Min Sung-Wook
    • The KIPS Transactions:PartB
    • /
    • v.13B no.3 s.106
    • /
    • pp.295-300
    • /
    • 2006
  • The integral imaging system is an auto-stereoscopic display that allows users to see 3D images without wearing special glasses. In the integral imaging system, 3D object information is captured from several viewpoints and stored as elemental images. Users can then see a reconstructed 3D image through the elemental images displayed behind a lens array. The elemental images can also be created by computer graphics, which is referred to as computer-generated integral imaging, and the process of creating them is called image mapping. Several image mapping methods have been proposed, such as PRR (Point Retracing Rendering), MVR (Multi-Viewpoint Rendering), and PGR (Parallel Group Rendering). However, they suffer from heavy rendering computation or a performance barrier as the number of elemental lenses in the lens array increases, which makes them difficult to use in real-time graphics applications such as virtual reality or real-time interactive games. In this paper, we propose a new image mapping method named VVR (Viewpoint Vector Rendering) that improves real-time rendering performance. The paper first describes the concept of VVR and compares the performance of its image mapping process with that of previous methods, then discusses possible directions for future improvement.
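The image-mapping step can be illustrated with a toy point-retracing computation in the spirit of PRR: for one 3D point, find where its ray through each pinhole-like lens center lands on the display plane. The geometry (lens array at z = 0, display at distance `gap` behind it) and all names are assumptions for illustration, not the proposed VVR method:

```python
import numpy as np

def elemental_positions(point, lens_pitch, gap, nx, ny):
    """For one 3D point, compute where it lands in each elemental image.
    Each lens acts as a pinhole at its center; 'gap' is the lens-to-display
    distance (units arbitrary but consistent)."""
    px, py, pz = point
    positions = []
    for j in range(ny):
        for i in range(nx):
            # lens center on the array plane (z = 0), array centered at origin
            cx = (i - (nx - 1) / 2) * lens_pitch
            cy = (j - (ny - 1) / 2) * lens_pitch
            # similar triangles: extend the ray point -> lens center
            # to the display plane behind the lens
            u = cx + (cx - px) * gap / pz
            v = cy + (cy - py) * gap / pz
            positions.append((i, j, u, v))
    return positions
```

Retracing every scene point through every lens this way is exactly the per-point cost that grows with the lens count, which is the bottleneck the abstract says VVR is designed to avoid.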

Stereo vision mixed reality system using the multi-blob marker (다중 블럽 마커를 이용한 스테레오 비전 혼합현실 시스템의 구현)

  • 양기선;김한성;손광훈
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.1907-1910
    • /
    • 2003
  • This paper describes a method of stereo image composition for mixed reality that requires neither camera calibration nor a complicated tracking algorithm. The proposed system tracks a panel bearing blob markers and composes virtual objects naturally using texture mapping, the technique commonly used in computer graphics to map 2D graphic data or man-made 2D images onto geometry. The proposed algorithm makes it possible to compose virtual data even when the panel is bent. To compose 3D objects, the system uses depth information obtained from the stereo images, so the cumbersome procedure of camera calibration is not needed.


Study on Seabed Mapping using Two Sonar Devices for AUV Application (복수의 수중 소나를 활용한 수중 로봇의 3차원 지형 맵핑에 관한 연구)

  • Joe, Hangil;Yu, Son-Cheol
    • The Journal of Korea Robotics Society
    • /
    • v.16 no.2
    • /
    • pp.94-102
    • /
    • 2021
  • This study addresses a method for 3D reconstruction from acoustic data acquired with heterogeneous sonar devices: a Forward-Looking Multibeam Sonar (FLMS) and a Profiling Sonar (PS). The challenges in sonar image processing are perceptual ambiguity, the loss of elevation information, and a low signal-to-noise ratio, which are caused by the ranging and intensity-based image generation mechanism of sonars. Conventional approaches utilize additional constraints such as Lambertian reflection and redundant data from various positions, but they are vulnerable to environmental conditions. Our approach uses two sonars with complementary data types. Sonars typically provide reliable information in the horizontal direction, but the loss of elevation information degrades data quality in the vertical direction. To overcome this characteristic, we adopt a crossed installation in which the PS is laid on its side and mounted on top of the FLMS. In this configuration, the FLMS scans horizontal information while the PS obtains a vertical profile of the area in front of the AUV. For the fusion of the two sonar data streams, we propose a probabilistic approach: a likelihood map is built using geometric constraints between the two sonar devices, and a Monte Carlo experiment using the derived model is conducted to extract 3D points. To verify the proposed method, we conducted a simulation and a field test, and a consistent seabed map was obtained. This method can be utilized for 3D seabed mapping with an AUV.
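A toy version of the probabilistic fusion idea, not the paper's actual likelihood map: an FLMS return fixes range and bearing but not elevation, so candidate elevations are sampled Monte Carlo style and weighted by a Gaussian likelihood standing in for the Profiling Sonar's vertical measurement. The Gaussian stand-in and all parameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_flms_ps(r, bearing, ps_elev_mean, ps_elev_std, n_samples=5000):
    """An FLMS return constrains a point to an arc (known range r and
    horizontal bearing, unknown elevation). Sample candidate elevations,
    weight each by a Gaussian likelihood derived from the PS's vertical
    profile, and return the weighted-mean 3D point."""
    elev = rng.uniform(-np.pi / 2, np.pi / 2, n_samples)  # candidate elevations
    w = np.exp(-0.5 * ((elev - ps_elev_mean) / ps_elev_std) ** 2)
    w /= w.sum()
    # spherical -> Cartesian for every sample, then weighted average
    x = r * np.cos(elev) * np.cos(bearing)
    y = r * np.cos(elev) * np.sin(bearing)
    z = r * np.sin(elev)
    return np.array([w @ x, w @ y, w @ z])
```

The weighted mean collapses the elevation ambiguity of a single FLMS return onto the elevation the PS actually observed, which is the complementary-sensor effect the abstract describes.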

Drawing of Sea Mapping using Sound Detector (음향탐지장비를 활용한 해저지형도작성에 관한 연구)

  • Lee, Jae-Gi;Kim, Myoung-Bae;Kim, Kam-Lae
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.6_2
    • /
    • pp.625-633
    • /
    • 2007
  • Recently, humans have been expanding marine investigation in order to extend their living environment from land to the sea. This study therefore identified objects on the sea bottom and their topographical undulation, and acquired a topographic map with a sound detector. As a result, the study acquired images with the sound detector and was able to draw up a sea-bottom map and a three-dimensional model map.

A Study on 3D Face Modelling based on Dynamic Muscle Model for Face Animation (얼굴 애니메이션을 위한 동적인 근육모델에 기반한 3차원 얼굴 모델링에 관한 연구)

  • 김형균;오무송
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.7 no.2
    • /
    • pp.322-327
    • /
    • 2003
  • This paper proposes a 3D face modelling technique based on a dynamic muscle model for constructing efficient face animation. Facial muscles are composed of face lines connecting 256 points defined by the dynamic muscle model, and a wireframe is constructed from these points. After a standard model is composed from the wireframe, texture mapping is applied using front and side 2D pictures to create an individual 3D face model. Feature points of the front and side views are used for accurate mapping: a textured face is constructed using the 2D coordinates and feature points of the front image, and likewise using the 2D coordinates and feature points of the side image.

Realistic 3D Scene Reconstruction from an Image Sequence (연속적인 이미지를 이용한 3차원 장면의 사실적인 복원)

  • Jun, Hee-Sung
    • The KIPS Transactions:PartB
    • /
    • v.17B no.3
    • /
    • pp.183-188
    • /
    • 2010
  • A factorization-based 3D reconstruction system is realized to recover a 3D scene from an image sequence. The image sequence is captured by an uncalibrated perspective camera from several views. Matched feature points across all images are obtained by a feature tracking method, and these data are supplied to the 3D reconstruction module to obtain a projective reconstruction. The projective reconstruction is converted to a Euclidean reconstruction by enforcing several metric constraints. After the triangular meshes are obtained, realistic reconstruction of the 3D models is completed by texture mapping. The developed system is implemented in C++, with the Qt library used for the user interface and the OpenGL graphics library used for the texture mapping routine and the model visualization program. Experimental results using synthetic and real image data demonstrate the effectiveness of the developed system.
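The factorization step can be sketched in its simplest (affine, Tomasi-Kanade-style) form. The paper's system is projective and then applies metric constraints, so this is an illustrative simplification under stated assumptions, not the implementation described:

```python
import numpy as np

def factorize(W):
    """Affine factorization of a 2F x P measurement matrix W (F frames,
    P tracked points): subtract per-row centroids (the translation part),
    then split the rank-3 SVD of the centered matrix into a camera matrix
    M (2F x 3) and a shape matrix S (3 x P), up to an affine ambiguity."""
    t = W.mean(axis=1, keepdims=True)   # per-row centroid
    Wc = W - t
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])       # distribute singular values evenly
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S, t
```

With noise-free affine projections the centered measurement matrix has rank 3, so `M @ S + t` reproduces `W` exactly; the remaining affine ambiguity is what the metric (Euclidean-upgrade) constraints in the abstract resolve.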

Text Region Extraction and OCR on Camera Based Images (카메라 영상 위에서의 문자 영역 추출 및 OCR)

  • Shin, Hyun-Kyung
    • The KIPS Transactions:PartD
    • /
    • v.17D no.1
    • /
    • pp.59-66
    • /
    • 2010
  • Traditional OCR engines are designed for scanned documents in a calibrated environment. Three-dimensional perspective distortion and smooth distortion in images are critical problems caused by uncalibrated devices, e.g., images from smartphones. To meet the growing demand for recognition of text embedded in photos acquired from uncalibrated hand-held devices, we address the problem in three aspects: a rotation-invariant method of text region extraction, a scale-invariant method of text line segmentation, and three-dimensional perspective mapping. By integrating these methods, we developed an OCR engine for camera-captured images.
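One common way to realize the perspective-mapping step is a DLT homography estimated from four point correspondences, which rectifies a perspective-distorted text region to a fronto-parallel view. The abstract does not specify its method, so this is a generic, assumed sketch:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: estimate the 3x3 homography H with
    dst ~ H @ src from four (or more) point correspondences, as the
    null vector of the stacked constraint matrix."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)            # smallest singular vector
    return H / H[2, 2]                  # fix the scale

def apply_h(H, p):
    """Apply a homography to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```

Mapping the four corners of a detected text quadrilateral to a rectangle with such an H is what removes the 3D perspective distortion before the recognizer runs.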

A Shadow Mapping Technique Separating Static and Dynamic Objects in Games using Multiple Render Targets (다중 렌더 타겟을 사용하여 정적 및 동적 오브젝트를 분리한 게임용 그림자 매핑 기법)

  • Lee, Dongryul;Kim, Youngsik
    • Journal of Korea Game Society
    • /
    • v.15 no.5
    • /
    • pp.99-108
    • /
    • 2015
  • To convey the locations of objects and improve realism in 3D games, shadow mapping, which computes the depth values of vertices from the light's point of view, is widely used. Since the depth values in the shadow map are calculated in world coordinates, the depth values of static objects do not need to be updated. In this paper, (1) to improve rendering speed, multiple render targets are used so that the depth values of static objects, stored only once, are separated from those of dynamic objects, which are stored every frame; and (2) to improve shadow quality in quarter-view 3D games, the light is positioned close to the dynamic objects and moved along with the camera. The effectiveness of the proposed method is verified by experiments with different configurations of static and dynamic objects in a 3D game.

REMARKS ON GENERALIZED (α, β)-DERIVATIONS IN SEMIPRIME RINGS

  • Hongan, Motoshi;ur Rehman, Nadeem
    • Communications of the Korean Mathematical Society
    • /
    • v.32 no.3
    • /
    • pp.535-542
    • /
    • 2017
  • Let R be an associative ring and α, β : R → R ring homomorphisms. An additive mapping d : R → R is called an (α, β)-derivation of R if d(xy) = d(x)α(y) + β(x)d(y) holds for all x, y ∈ R, and an additive mapping D : R → R is called a generalized (α, β)-derivation of R associated with an (α, β)-derivation d if D(xy) = D(x)α(y) + β(x)d(y) holds for all x, y ∈ R. In this note, we intend to generalize a theorem of Vukman [5] and a theorem of Daif and El-Sayiad [2].