• Title/Summary/Keyword: Texture Segmentation (텍스쳐 분할)

Search Results: 113, Processing Time: 0.04 seconds

2D to 3D Conversion Using The Machine Learning-Based Segmentation And Optical Flow (학습기반의 객체분할과 Optical Flow를 활용한 2D 동영상의 3D 변환)

  • Lee, Sang-Hak
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.3 / pp.129-135 / 2011
  • In this paper, we propose an algorithm using optical flow and machine-learning-based segmentation for the 3D conversion of 2D video. To obtain a segmentation suitable for 3D conversion, we design a new energy function in which color/texture features are incorporated through a machine learning method, and optical flow is introduced to focus on regions with motion. The depth map is then calculated from the optical flow of the segmented regions, and left/right images for the 3D conversion are produced. Experiments on various videos show that the proposed method yields reliable segmentation results and depth maps for the 3D conversion of 2D video.
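The depth-assignment step described above — each segmented region receiving a depth from its optical-flow magnitude — can be sketched as follows. This is a minimal illustration under assumed conventions (a linear mapping in which the fastest-moving region is nearest); the function and parameter names are hypothetical, not the paper's.

```python
import math

def region_depth(flow, labels, n_regions, d_near=255.0, d_far=0.0):
    """flow: list of (u, v) vectors per pixel; labels: region id per pixel.
    Returns one depth value per region (illustrative linear mapping)."""
    sums = [0.0] * n_regions
    counts = [0] * n_regions
    for (u, v), r in zip(flow, labels):
        sums[r] += math.hypot(u, v)   # flow magnitude
        counts[r] += 1
    means = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    m_max = max(means) or 1.0
    # normalize so the fastest-moving region gets the nearest depth
    return [d_far + (d_near - d_far) * m / m_max for m in means]
```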

Incremental EM algorithm with multiresolution kd-trees and cluster validation and its application to image segmentation (다중해상도 kd-트리와 클러스터 유효성을 이용한 점증적 EM 알고리즘과 이의 영상 분할에의 적용)

  • Lee, Kyoung-Mi
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.6 / pp.523-528 / 2015
  • In this paper, we propose a new multiresolutional and dynamic variant of the EM algorithm. EM is a very popular and powerful clustering algorithm; however, it has difficulty indexing multiresolution data and requires a priori information on the proper number of clusters in many applications. To solve these problems, the proposed EM algorithm imposes a multiresolution kd-tree structure in the E-step and allocates clusters incrementally based on sequential data. To validate clusters, we use a merge criterion for cluster merging. We demonstrate that the proposed EM algorithm performs well for texture image segmentation.
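As a toy illustration of the cluster-validation idea — deciding when two clusters should be merged — one might compare the distance between cluster means against their pooled spread. The specific criterion below is an assumption for illustration only; the paper defines its own merge criterion.

```python
def should_merge(mean_a, std_a, mean_b, std_b, thresh=1.0):
    """Merge two 1-D Gaussian clusters when their means are closer than
    `thresh` times their pooled spread (illustrative criterion only)."""
    pooled = (std_a + std_b) / 2.0
    return abs(mean_a - mean_b) <= thresh * pooled
```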

A Mesh Segmentation Reflecting Global and Local Geometric Characteristics (전역 및 국부 기하 특성을 반영한 메쉬 분할)

  • Im, Jeong-Hun;Park, Young-Jin;Seong, Dong-Ook;Ha, Jong-Sung;Yoo, Kwan-Hee
    • The KIPS Transactions:PartA / v.14A no.7 / pp.435-442 / 2007
  • This paper is concerned with the mesh segmentation problem, which arises in diverse applications such as texture mapping, simplification, morphing, compression, and shape matching for 3D mesh models. Mesh segmentation is the process of dividing a given mesh into disjoint sub-meshes. We propose a method for segmenting meshes that simultaneously reflects their global and local geometric characteristics. First, we extract sharp vertices from the mesh vertices by interpreting the curvature and convexity of a given mesh, which capture the local and global geometric characteristics of the mesh, respectively. Next, we partition the sharp vertices into $\kappa$ clusters by applying the $\kappa$-means clustering method [29] based on the Euclidean distances between all pairs of sharp vertices. The remaining vertices are then merged into the nearest clusters by Euclidean distance. We also implement the proposed method and visualize its experimental results on several 3D mesh models.
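The sharp-vertex clustering step can be sketched with a plain k-means loop over 3D point coordinates using Euclidean distance, as the abstract describes. The initialization, iteration count, and all names below are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Cluster 3-D points (tuples) into k groups with plain Euclidean k-means."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        # assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        # recompute each center as the mean of its cluster
        centers = [
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters
```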

Object VR-based Virtual Textile Wearing System Using Textile Texture Mapping (직물 텍스쳐 매핑을 이용한 객체 VR 기반 가상 직물 착용 시스템)

  • Kwak, No-Yoon
    • Journal of Digital Convergence / v.10 no.8 / pp.239-247 / 2012
  • This paper presents an Object VR-based virtual textile wearing system that carries out textile texture mapping based on viewpoint vector estimation and an intensity difference map. The proposed system allows a new textile pattern selected by the user to be virtually worn on the clothing shape section segmented from multi-view 2D images of a clothes model for Object VR (Object Virtual Reality), and its virtual wearing appearance to be viewed three-dimensionally from multiple viewpoints. Regardless of the color or intensity of the model clothes, the proposed system can virtually change the textile pattern while preserving the properties of the selected clothing shape section, and can quickly and easily simulate, compare, and select multiple textile pattern combinations for individual styles or entire outfits. The proposed system offers high practicality and an easy-to-use interface, as it allows real-time processing in various digital environments, creates comparatively natural and realistic virtual wearing styles, and supports semi-automatic processing to reduce manual work.
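As a rough illustration of shading-preserving texture replacement, one common approach modulates the new textile color by the original region's intensity so that folds and shadows survive the pattern swap. The multiplicative model, the reference intensity, and all names below are assumptions for illustration; the paper's intensity-difference-map method is not reproduced here.

```python
def retexture(region_intensity, texture_rgb, ref=200.0):
    """region_intensity: grayscale value per segmented clothing pixel;
    texture_rgb: selected textile (r, g, b) per pixel (parallel list);
    ref: intensity treated as 'unshaded' (assumed constant)."""
    out = []
    for lum, (r, g, b) in zip(region_intensity, texture_rgb):
        s = lum / ref                      # shading factor from original clothes
        out.append(tuple(min(255, int(c * s)) for c in (r, g, b)))
    return out
```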

Changes of Texture, Soluble Solids and Protein during Cooking of Soybeans (콩의 조리과정 중 텍스쳐, 고형물 및 단백질의 변화)

  • Kim, Young-Ok;Jung, Hae-Ok;Rhee, Chong-Ouk
    • Korean Journal of Food Science and Technology / v.22 no.2 / pp.192-198 / 1990
  • Texture and losses of total solids and proteins of soybeans were studied during cooking at $100-135^{\circ}C$. The textural changes were measured using a puncture probe with an Instron Universal Testing Machine, and changes in the microstructure of the beans were observed with scanning electron microscopy during cooking. The major effect observed was a breakdown of the cell walls and the appearance of the protein bodies with the soaking process. As the cooking time at $100^{\circ}C$ increased, separation of cells and changes in cell shape could be seen in the sample. Greater amounts of soluble solids were leached out of the beans with longer cooking times.


Occlusion Handling in Generating Multi-view Images Using Image Inpainting (영상 인페인팅을 이용한 다중 시점 영상 생성시의 가려짐 영역 처리)

  • Kim, Yong-Jin;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2008.11a / pp.173-176 / 2008
  • In this paper, we propose a method for interpolating the occluded regions that arise when generating virtual multi-view images from a single reference image and its corresponding ground-truth depth map. The method uses image inpainting together with layer-by-layer interpolation according to depth. First, the reference image is segmented into several layers according to depth information. For each layer, the pixels inside the occluded regions are interpolated using image inpainting. In the final step, the individually interpolated layer images are composited into a single virtual-viewpoint image. Because the image is segmented by depth, interpolation can preserve the texture coherence at each depth level, enabling more accurate and detailed occlusion interpolation than existing methods. We demonstrate the effectiveness of the proposed method through various experimental results.
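The final back-to-front compositing of the interpolated depth layers can be sketched as follows, with `None` marking pixels a layer does not cover. The flat per-pixel representation and names are illustrative assumptions.

```python
def composite_layers(layers):
    """layers: list of per-pixel value lists, ordered far -> near; None = hole.
    Nearer layers overwrite farther ones wherever they have coverage."""
    width = len(layers[0])
    out = [None] * width
    for layer in layers:
        for i, px in enumerate(layer):
            if px is not None:
                out[i] = px
    return out
```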


Stereo matching using dynamic programming and image segments (동적 계획법과 이미지 세그먼트를 이용한 스테레오 정합)

  • Dong Won-Pyo;Jeong Chang-Sung
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.805-807 / 2005
  • In this paper, we propose a new stereo matching technique using dynamic programming and image segments. In general, dynamic programming is fast and yields comparatively accurate, dense disparity maps. However, it can produce erroneous results in occluded regions near boundaries and in ambiguous regions with little texture. To solve these problems, we first over-segment the image into very small regions and assume that each small region has a similar disparity. Next, matching is performed via dynamic programming, where the cost is computed with a new cost function that applies the segment regions within the conventional matching window, improving accuracy. Finally, errors in the dense disparity map obtained by dynamic programming are detected using the visibility and similarity of the segment regions and corrected through segment matching, yielding an accurate disparity map.
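A classic scanline dynamic-programming stereo matcher — the baseline this work builds on — can be sketched as below, using a textbook absolute-difference match cost and a constant occlusion penalty rather than the paper's segment-based cost function. Names and costs are illustrative.

```python
def scanline_dp(left, right, occ=10.0):
    """Match two intensity scanlines with DP; returns per-pixel disparity for
    `left` (None = occluded)."""
    n, m = len(left), len(right)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i > 0 and j > 0:
                c = cost[i - 1][j - 1] + abs(left[i - 1] - right[j - 1])
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, "M"                      # match
            if i > 0 and cost[i - 1][j] + occ < cost[i][j]:
                cost[i][j], back[i][j] = cost[i - 1][j] + occ, "L"       # left occluded
            if j > 0 and cost[i][j - 1] + occ < cost[i][j]:
                cost[i][j], back[i][j] = cost[i][j - 1] + occ, "R"       # right occluded
    # backtrack from (n, m) to recover per-pixel disparities
    disp = [None] * n
    i, j = n, m
    while i > 0 or j > 0:
        move = back[i][j]
        if move == "M":
            i, j = i - 1, j - 1
            disp[i] = i - j
        elif move == "L":
            i -= 1
        else:
            j -= 1
    return disp
```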


Dancing Avatar: You can dance like PSY too (춤추는 아바타: 당신도 싸이처럼 춤을 출 수 있다.)

  • Gu, Dongjun;Joo, Youngdon;Vu, Van Manh;Lee, Jungwoo;Ahn, Heejune
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.256-259 / 2021
  • In this paper, we design and implement a technique that captures a person with a Kinect, reconstructs a 3D avatar, and makes it dance like a celebrity. Unlike existing purely deep-learning-based methods, this technique uses a 3D human body model to obtain stable and flexible results. First, the geometric information of the body model is estimated using 3D joints, and a detailed texture is recovered through DensePose. Clothing model information is then reconstructed using a 3D point cloud and ICP matching. The resulting avatar, built from the body and clothing models, retains the rigged property of the body model, making it well suited to animation, and naturally performed dances such as PSY's <Gangnam Style>. Remaining issues include more accurate segmentation of the body and clothing regions and removal of the noise that can arise during segmentation.
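The point-cloud matching step can be illustrated with a single translation-only ICP iteration: pair each source point with its nearest target point, then shift the source by the mean residual. Real ICP iterates this and also estimates rotation; the names here are illustrative.

```python
import math

def icp_translation_step(src, dst):
    """One translation-only ICP step: nearest-neighbour pairing, then shift
    src by the mean residual. Returns (moved_points, shift)."""
    shift = [0.0, 0.0, 0.0]
    for p in src:
        q = min(dst, key=lambda d: math.dist(p, d))   # nearest target point
        for k in range(3):
            shift[k] += (q[k] - p[k]) / len(src)
    moved = [tuple(p[k] + shift[k] for k in range(3)) for p in src]
    return moved, tuple(shift)
```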


Quality Characteristics of Barley ${\beta}$-Glucan Enriched Noodles (보리 ${\beta}$-glucan 강화 국수의 품질 특성)

  • Lee, Young-Tack;Jung, Ji-Young
    • Korean Journal of Food Science and Technology / v.35 no.3 / pp.405-409 / 2003
  • This study was conducted to evaluate the quality characteristics of noodles containing barley flour and a ${\beta}$-glucan enriched fraction. Compared to 100% wheat flour, composite flours containing barley flour and the ${\beta}$-glucan enriched fraction decreased the initial pasting temperature and increased the maximum peak viscosity. The noodles containing the ${\beta}$-glucan enriched fraction exhibited somewhat darker color and lower values in cooked weight, volume, moisture content, and cooking loss. From the textural properties measured by a texture analyzer, the noodles with 30% barley flour and ${\beta}$-glucan enriched fraction were similar to the 100% wheat noodle in springiness and significantly higher in gumminess, hardness, and chewiness. The sensory evaluation results indicated that barley flour or the ${\beta}$-glucan enriched fraction could be substituted for wheat flour at levels up to 30% without seriously depressing noodle quality. Cooking of raw noodles with ${\beta}$-glucan enrichment slightly increased the total, insoluble, and soluble ${\beta}$-glucan content.

Poisson Video Composition Using Shape Matching (형태 정합을 이용한 포아송 동영상 합성)

  • Heo, Gyeongyong;Choi, Hun;Kim, Jihong
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.4 / pp.617-623 / 2018
  • In this paper, we propose a novel seamless video composition method based on shape matching and the Poisson equation. The method consists of a video segmentation process and a video blending process. In the video segmentation process, the user first sets a trimap for the first frame, and the grab-cut algorithm is then performed. Next, considering that segmentation performance may degrade when the color, brightness, and texture of the object and the background are similar, the object region segmented in the current frame is corrected through shape matching between the objects of the current and previous frames. In the video blending process, the object of the source video and the background of the target video are blended seamlessly using the Poisson equation, and the object is positioned along the movement path set by the user. Simulation results show that the proposed method performs better not only in the naturalness of the composited video but also in computation time.
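The gradient-domain blending idea behind the Poisson equation can be shown in one dimension: keep the target's values outside the pasted region as boundary conditions while using the source's Laplacian as the guidance field, solved here by plain Gauss-Seidel iteration. This is a sketch of the general technique under assumed names, not the paper's implementation.

```python
def poisson_blend_1d(target, source, lo, hi, iters=500):
    """Blend source[lo:hi] into target, keeping target's values outside
    [lo, hi) as boundary conditions (requires 1 <= lo and hi <= len - 1)."""
    out = [float(v) for v in target]
    for _ in range(iters):
        for i in range(lo, hi):
            # source Laplacian is the guidance field; neighbours give boundary
            lap = 2 * source[i] - source[i - 1] - source[i + 1]
            out[i] = (out[i - 1] + out[i + 1] + lap) / 2.0
    return out
```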