• Title/Summary/Keyword: map texture (맵 텍스쳐)


A Terrain Rendering Method using Roughness Map and Bias Map (거칠기맵과 편향맵을 이용한 지형 렌더링 기법)

  • Lee, Eun-Seok;Jo, In-Woo;Shin, Byeong-Seok
    • Journal of the Korea Computer Graphics Society / v.17 no.2 / pp.1-9 / 2011
  • Recent research uses several LOD techniques for real-time visualization of large terrain datasets. During mesh simplification, however, geometry popping may occur between consecutive frames because of geometric error. We propose an efficient method for reducing geometry popping using a roughness map and a bias map, which move vertices of the terrain mesh to positions that minimize the geometric error. Both maps are represented as textures suitable for GPU processing, and since vertex displacement using the bias map is performed on the GPU, high-speed visualization is possible.
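The bias-map displacement described in this abstract can be sketched in plain Python. Note the paper performs this on the GPU with the maps stored as textures; the function name, map layout, and threshold here are illustrative assumptions.

```python
def apply_bias(vertices, roughness_map, bias_map, rough_threshold=0.5):
    """Displace simplified-mesh vertices using a per-texel bias.

    roughness_map marks where simplification error is large; bias_map
    stores the signed height correction toward the full-detail mesh.
    Correcting only rough regions suppresses geometry popping.
    """
    corrected = []
    for x, y, h in vertices:
        if roughness_map[y][x] >= rough_threshold:
            h += bias_map[y][x]          # move toward the true height
        corrected.append((x, y, h))
    return corrected
```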

Object VR-based Virtual Textile Wearing System Using Textile Texture Mapping (직물 텍스쳐 매핑을 이용한 객체 VR 기반 가상 직물 착용 시스템)

  • Kwak, No-Yoon
    • Journal of Digital Convergence / v.10 no.8 / pp.239-247 / 2012
  • This paper describes an Object VR-based virtual textile wearing system that performs textile texture mapping based on viewpoint vector estimation and an intensity difference map. The proposed system virtually applies a new textile pattern selected by the user to the clothing shape section segmented from multi-view 2D images of a clothes model for Object VR (Object Virtual Reality), and lets the user view the virtual wearing result in 3D from multiple viewpoints of the object. Regardless of the color or intensity of the model clothes, the system can virtually change the textile pattern while preserving the properties of the selected clothing shape section, and can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual garments or entire outfits. The system offers high practicality and an easy-to-use interface: it runs in real time in various digital environments, produces comparatively natural and realistic virtual wearing results, and supports semi-automatic processing that reduces manual work.
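The core mapping idea, applying a new pattern inside the segmented clothing region while the stored intensity difference re-applies the original shading, can be sketched as follows. This is a grayscale simplification with assumed names, not the paper's implementation.

```python
def virtual_wear(model_image, pattern, region_mask, intensity_diff):
    """Apply a textile pattern to the segmented clothing region.

    intensity_diff holds, per pixel, the original garment's deviation
    from its mean brightness; adding it back onto the flat pattern
    preserves the folds and highlights of the model clothes.
    """
    out = [row[:] for row in model_image]        # keep background as-is
    for y, mask_row in enumerate(region_mask):
        for x, inside in enumerate(mask_row):
            if inside:
                v = pattern[y][x] + intensity_diff[y][x]
                out[y][x] = max(0, min(255, v))  # clamp to 8-bit range
    return out
```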

Enhanced Deep Feature Reconstruction : Texture Defect Detection and Segmentation through Preservation of Multi-scale Features (개선된 Deep Feature Reconstruction : 다중 스케일 특징의 보존을 통한 텍스쳐 결함 감지 및 분할)

  • Jongwook Si;Sungyoung Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.6 / pp.369-377 / 2023
  • In industrial manufacturing, quality control is pivotal for minimizing defect rates; inadequate management can result in additional costs and production delays. This study underscores the importance of detecting texture defects in manufactured goods and proposes a more precise defect detection technique. While the DFR (Deep Feature Reconstruction) model adopted an approach based on feature-map amalgamation and reconstruction, it had inherent limitations. We therefore incorporated a new loss function based on statistical methods, integrated a skip-connection structure, and tuned parameters to overcome these constraints. Applied to the texture categories of the MVTec-AD dataset, the enhanced model recorded a 2.3% higher defect segmentation AUC than previous methods, and overall defect detection performance improved. These findings confirm the contribution of the proposed method to defect detection through reconstruction of feature-map combinations.
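A minimal sketch of DFR-style scoring: the defect segmentation map is the per-position reconstruction error of the aggregated feature vectors. The actual network, loss function, and skip connections are not reproduced here; this only illustrates how reconstruction error becomes a defect map.

```python
def defect_score_map(features, reconstructed):
    """Per-position anomaly score for texture defect segmentation.

    features / reconstructed are H x W grids of feature vectors; a
    large squared reconstruction error marks a likely defect pixel.
    """
    return [[sum((f - r) ** 2 for f, r in zip(fv, rv))
             for fv, rv in zip(f_row, r_row)]
            for f_row, r_row in zip(features, reconstructed)]
```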

Texture mapping of 3D game graphics - characteristics of hand painted texture (3D게임그래픽의 텍스쳐 매핑-손맵의 특징)

  • Sohn, Jong-Nam;Han, Tae-Woo
    • Journal of Digital Convergence / v.13 no.11 / pp.331-336 / 2015
  • Texture mapping of low-polygon models is one of the important workflows in 3D game graphics. In this workflow a single hand-painted texture is mapped onto the surface of the 3D model and by itself conveys the color of the material and its visual sense of touch. In 3D game graphics it is very important to visualize tactile qualities such as protrusions and dents; the perception of a flat plane as a 3D sense of volume can be interpreted through the Gestalt laws. Moreover, the concept of affordance is needed to explain how the tactile sensation is recognized and perceived: the relationship is learned and then recognized visually. In this paper a questionnaire survey targeting 3D game graphic designers is carried out, and by analyzing the results we identify the important characteristics of the process of making hand-painted textures.

Model-based 3D Multiview Object Implementation by OpenGL (OpenGL을 이용한 모델기반 3D 다시점 영상 객체 구현)

  • Oh, Won-Sik;Kim, Dong-Wook;Kim, Hwa-Sung;Yoo, Ji-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2006.11a / pp.59-62 / 2006
  • This paper focuses on the structure for implementing model-based 3D multiview objects using OpenGL rendering and on the algorithms applied in each module. To create a multiview object from a single texture image and a depth map, the depth information is first preprocessed. The preprocessed depth information is then sampled into vertex data at regular intervals in OpenGL; each sampled vertex is a point in 3D space whose z value is the depth. Based on these vertices, a Delaunay triangulation algorithm is applied to construct the polygons for texture mapping. The texture image is mapped onto the resulting polygons, producing an object whose viewpoint can be freely adjusted through OpenGL coordinate transformations. To obtain a multiview object with a wider viewing range from only one image and its depth information, new vertices are generated to extend the polygons, securing a wider range of viewpoints than before. In addition, smoothing the depth information along the boundary regions of the rendered model yields a visual improvement.
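The vertex-sampling step of this pipeline can be sketched as follows; the actual system then Delaunay-triangulates these points and texture-maps the polygons in OpenGL. Function and parameter names are assumptions for illustration.

```python
def sample_vertices(depth_map, step):
    """Sample the preprocessed depth map at a regular interval.

    Each sample becomes a 3D vertex (x, y, z) whose z is the depth
    value; these vertices are later triangulated into the polygons
    used for texture mapping.
    """
    vertices = []
    for y in range(0, len(depth_map), step):
        for x in range(0, len(depth_map[0]), step):
            vertices.append((x, y, depth_map[y][x]))
    return vertices
```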


Advanced Pre-Integrated BRDF for Realistic Transmission Light Color in Skin Rendering based on Unity3D (Unity3D기반 피부 투과광의 사실적 색표현을 위한 개선된 사전정의 BRDF)

  • Kim, Seong-Hoon;Moon, Yoon-Young;Choi, Jin-Woo;Yang, Young-Kyu;Han, Gi-Tae
    • Proceedings of the Korea Information Processing Society Conference / 2014.04a / pp.840-843 / 2014
  • Realistic skin rendering must account not only for diffuse and specular reflection at the skin surface, but also for light scattered inside the skin layers and light transmitted through thin skin. Computing these physical effects in real time requires heavy computation, so they can be approximated with a pre-integrated BRDF method that precomputes diffuse and specular terms, stores them as textures, and reuses them. However, the skin transmission color texture map generated by a pre-integrated BRDF is fixed, so the color of the light transmitted through the skin does not change when the light color changes, which looks unnatural. In this paper, we solve this problem by computing a light attenuation ratio from the distance between the object and the light, and by modifying the RGB channels of the transmission color texture map using the light color and the attenuation ratio, showing that natural transmitted-light colors are possible in skin rendering.
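The correction described here, scaling the fixed transmission color by the light color and a distance-based attenuation ratio, can be sketched like this. The inverse-square falloff model and all names are assumptions, not the paper's exact formulation.

```python
def transmitted_color(lut_rgb, light_rgb, distance, k=1.0):
    """Recolor a pre-integrated transmission texture sample.

    Multiplying each RGB channel by the light color and a
    distance-based attenuation makes the transmitted skin color
    follow the scene light instead of staying fixed.
    """
    atten = 1.0 / (1.0 + k * distance * distance)   # assumed falloff
    return tuple(min(1.0, c * l * atten)
                 for c, l in zip(lut_rgb, light_rgb))
```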

Image Warping Using Vector Field Based Deformation and Its Application to Texture Mapping (벡터장 기반 변형기술을 이용한 이미지 와핑 방법 : 텍스쳐 매핑에의 응용을 중심으로)

  • Seo, Hye-Won;Cordier, Frederic
    • Journal of KIISE:Computer Systems and Theory / v.36 no.5 / pp.404-411 / 2009
  • We introduce in this paper a new method for smooth, foldover-free warping of images, based on the vector field deformation technique proposed by Von Funck et al. It allows users to specify constraints in two ways: positional constraints, which fix the position of a point in the image, and gradient constraints, which fix the orientation and scaling of parts of the image. From the user-specified constraints, it computes in the image domain a C1-continuous velocity vector field, along which each pixel progressively moves from its original position to the target. The target positions of the pixels are obtained by solving a set of differential equations with the 4th-order Runge-Kutta method. We show how our method can be used for texture mapping with hard constraints. We start with an unconstrained planar embedding of a target mesh using a previously known method (Least Squares Conformal Maps). Then, to obtain a texture map that satisfies the given constraints, we use the proposed warping method to align the features of the texture image with those on the unconstrained embedding. Compared to previous work, our method generates a smoother texture mapping, offers a higher level of control for defining the constraints, and is simpler to implement.
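One integration step of the pixel motion, the 4th-order Runge-Kutta update along the velocity field, can be sketched as follows. The velocity field is any user-supplied callable; the step below is the standard RK4 scheme, not code from the paper.

```python
def rk4_step(x, y, velocity, h):
    """Advance a pixel one step of size h along the velocity field.

    velocity(x, y) returns the field (u, v) at that point; the four
    stage evaluations give 4th-order accuracy, keeping the computed
    pixel trajectories smooth for small enough h.
    """
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = velocity(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = velocity(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)
```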

Color Image Segmentation and Textile Texture Mapping of 2D Virtual Wearing System (2D 가상 착의 시스템의 컬러 영상 분할 및 직물 텍스쳐 매핑)

  • Lee, Eun-Hwan;Kwak, No-Yoon
    • Journal of KIISE:Computer Systems and Theory / v.35 no.5 / pp.213-222 / 2008
  • This paper addresses color image segmentation and textile texture mapping for a 2D virtual wearing system. The proposed system virtually applies a new textile pattern selected by the user to a clothing shape section segmented from a 2D clothes model image by a color image segmentation technique, using the section's intensity difference map. Regardless of the color or intensity of the model clothes, the system can virtually change the textile pattern or color while preserving the illumination and shading properties of the selected clothing shape section, and can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual garments or entire outfits. The system offers high practicality and an easy-to-use interface: it runs in real time in various digital environments, produces comparatively natural and realistic virtual wearing results, and supports semi-automatic processing that reduces manual work to a minimum. It can stimulate the creative activity of designers by simulating the effect of a textile pattern design on the appearance of clothes without manufacturing physical garments, and by helping purchasers with decision-making it can promote B2B and B2C e-commerce.

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.3 / pp.129-135 / 2011
  • In this paper, we propose an algorithm for 3D conversion of 2D video using optical flow and machine learning-based segmentation. For segmentation suitable for 3D conversion, we design a new energy function in which color/texture features are included through a machine learning method and optical flow is introduced to focus on regions with motion. The depth map is then calculated from the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for 3D conversion of 2D video.
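A minimal sketch of the depth-assignment step: each segmented region gets a depth from its mean optical-flow magnitude, with larger motion assumed nearer, and the depth then becomes a pixel shift for the left/right views. The linear mapping and all names are assumptions, not the paper's exact formulation.

```python
def region_depths(region_flow):
    """Map each region's mean optical-flow magnitude to [0, 1].

    region_flow: {region_id: mean flow magnitude}. Regions that move
    faster across frames are assumed closer to the camera, so they
    receive a larger nearness value.
    """
    peak = max(region_flow.values())
    return {r: mag / peak for r, mag in region_flow.items()}

def stereo_disparity(depth, max_disp=8):
    """Pixel shifts for the left/right views derived from the depth."""
    d = int(round(depth * max_disp))
    return -d, d          # the two views shift in opposite directions
```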

Stereo Vision-Based Obstacle Detection and Vehicle Verification Methods Using U-Disparity Map and Bird's-Eye View Mapping (U-시차맵과 조감도를 이용한 스테레오 비전 기반의 장애물체 검출 및 차량 검증 방법)

  • Lee, Chung-Hee;Lim, Young-Chul;Kwon, Soon;Lee, Jong-Hun
    • Journal of the Institute of Electronics Engineers of Korea SC / v.47 no.6 / pp.86-96 / 2010
  • In this paper, we propose stereo vision-based obstacle detection and vehicle verification methods using a U-disparity map and bird's-eye view mapping. First, we extract a road feature using the most frequent disparity values in each row and column, and use it to extract obstacle areas on the road. To extract obstacle areas precisely, we use a U-disparity map with a threshold derived from the disparity values and camera parameters. Because the extracted areas may still contain multiple obstacles, a further segmentation step is performed: the obstacle areas are converted into a bird's-eye view using camera modeling and parameters, where obstacles can be segmented robustly because they are laid out according to range. Finally, we verify whether each obstacle is a vehicle using various vehicle features, namely road contact, constant horizontal length, aspect ratio, and texture information. Experiments in real traffic situations demonstrate the performance of the proposed algorithms.
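The U-disparity map at the heart of this method can be sketched in a few lines as a column-wise disparity histogram: the pixels of a vertical obstacle share nearly one disparity over many rows of a column, so they pile up as a strong peak. Names are assumptions for illustration.

```python
def u_disparity(disparity_map, max_disp):
    """Column-wise histogram of disparities (the U-disparity map).

    Cell [d][x] counts how many pixels in image column x have
    disparity d; obstacles appear as high-count cells, while the
    road surface spreads thinly across many disparities.
    """
    width = len(disparity_map[0])
    u_map = [[0] * width for _ in range(max_disp + 1)]
    for row in disparity_map:
        for x, d in enumerate(row):
            if 0 <= d <= max_disp:
                u_map[d][x] += 1
    return u_map
```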