• Title/Summary/Keyword: 2D projection transformation (2D 투영 변환)

42 search results (processing time: 0.023 seconds)

3D Mesh Watermarking Using Projection onto Convex Sets (볼록 집합 투영 기법을 이용한 3D 메쉬 워터마킹)

  • Lee Suk-Hwan;Kwon Seong-Geun;Kwon Ki-Ryong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.2 s.308 / pp.81-92 / 2006
  • This paper proposes a robust watermarking method for 3D mesh models based on projection onto convex sets (POCS). After designing convex sets for robustness and invisibility, two of the requirements for a watermarking system, the 3D mesh model is projected alternately onto the two constraint convex sets until a convergence condition is satisfied. The robustness convex set is designed to embed the watermark into the distance distribution of the vertices so that it is robust against attacks such as mesh simplification, cropping, rotation, translation, scaling, and vertex randomization. The invisibility convex set is designed so that the embedded watermark remains invisible. The decision values and the indices at which the watermark was embedded are used to extract the watermark without the original model. Experimental results verify that the watermarked mesh model preserves invisibility and is robust against attacks such as translation, scaling, mesh simplification, cropping, and vertex randomization.
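
As a rough illustration of the alternating-projection step described above, the following Python sketch projects a vector back and forth between two simple convex sets (a hyperplane and a box) until the iterates converge. The sets here are placeholders; the paper's actual robustness and invisibility sets are defined on the vertex distance distribution of the mesh.

    import numpy as np

    def project_onto_hyperplane(x, a, b):
        """Project x onto the hyperplane {y : a . y = b}."""
        return x - ((a @ x - b) / (a @ a)) * a

    def project_onto_box(x, lo, hi):
        """Project x onto the box [lo, hi]^n."""
        return np.clip(x, lo, hi)

    def pocs(x0, a, b, lo, hi, tol=1e-8, max_iter=1000):
        """Alternate projections onto the two convex sets until the iterates converge."""
        x = x0.copy()
        for _ in range(max_iter):
            x_next = project_onto_box(project_onto_hyperplane(x, a, b), lo, hi)
            if np.linalg.norm(x_next - x) < tol:
                break
            x = x_next
        return x

    # usage: pocs(np.array([2.0, -3.0, 5.0]), a=np.array([1.0, 1.0, 1.0]), b=1.0, lo=-1.0, hi=1.0)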

Color Component Analysis For Image Retrieval (이미지 검색을 위한 색상 성분 분석)

  • Choi, Young-Kwan;Choi, Chul;Park, Jang-Chun
    • The KIPS Transactions: Part B / v.11B no.4 / pp.403-410 / 2004
  • Recently, studies of image analysis as a preprocessing stage for medical image analysis or image retrieval have been actively carried out. This paper proposes a way of utilizing color components for image retrieval. Retrieval is based on color components, and the colors are analyzed with the CLCM (Color Level Co-occurrence Matrix) and statistical techniques. The CLCM proposed in this paper projects color components onto 3D space through a geometric rotation transform and then interprets the distribution produced by their spatial relationships. The CLCM is a 2D histogram built in a color model that is created by geometrically rotating the original color model, and it is analyzed with statistical techniques. Like the CLCM, the GLCM (Gray Level Co-occurrence Matrix) [1] and Invariant Moments [2,3] use 2D distribution charts and interpret 2D data with basic statistical techniques. However, even though the GLCM and Invariant Moments are optimized for their respective domains, they cannot fully interpret irregular data in spatial coordinates; because they rely only on basic statistics, the reliability of the extracted features is low. To interpret the spatial relationships and weights of the data, this study uses Principal Component Analysis [4,5] from multivariate statistics. To increase accuracy, it proposes projecting the color components onto 3D space, rotating them, and extracting features of the data from all angles.
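
The abstract does not spell out the CLCM construction, but the PCA step it relies on can be sketched as follows in Python (NumPy), assuming the color components have already been collected as an N x 3 array of points in 3D color space.

    import numpy as np

    def pca_color_features(pixels_rgb, n_components=3):
        """Principal Component Analysis of color data (N x 3 array).
        Returns the principal axes and the data projected onto them."""
        X = pixels_rgb.astype(np.float64)
        X -= X.mean(axis=0)                     # center the data
        cov = np.cov(X, rowvar=False)           # 3x3 covariance of the color channels
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigen-decomposition (ascending order)
        order = np.argsort(eigvals)[::-1]       # sort axes by explained variance
        axes = eigvecs[:, order[:n_components]]
        return axes, X @ axes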

UV Mapping Based Pose Estimation of Furniture Parts in Assembly Manuals (UV-map 기반의 신경망 학습을 이용한 조립 설명서에서의 부품의 자세 추정)

  • Kang, Isaac;Cho, Nam Ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.667-670 / 2020
  • Recently, in fields such as augmented reality and robotics, estimating an object's pose has become necessary in addition to detecting its location. Datasets that include pose information are relatively scarce compared with datasets that contain only location information, which makes it difficult to apply neural network architectures, but machine-learning-based pose estimation algorithms have recently begun to appear. In this paper, we build on the structure of the Dense 6D Pose Object detector (DPOD) [11] to estimate the poses of furniture parts drawn in furniture assembly manuals. DPOD [11] takes an RGB image as input, estimates the pixels belonging to the region of the object whose pose is to be estimated, and estimates the UV-map value of the object's 3D model at each pixel in that region. Once 2D-3D correspondences have been generated for every object pixel, the transformation matrix between the object in the RGB image and its 3D model is obtained through RANSAC and the PnP algorithm. In this paper, the network was trained on RGB images obtained by projecting the 3D models of furniture parts into 2D under 24 predefined pose candidates, and at test time the poses of furniture parts in real assembly manuals were estimated. Experiments on the assembly manual of the IKEA Stefan chair yielded an ADD score of 100%, and when an estimate was counted as correct if it was the candidate pose closest to the ground-truth pose, the accuracy was also 100%. Using the proposed network together with an object detection network that locates furniture parts in the assembly manual and a retrieval network that identifies the type of each part, the pose of a furniture part can ultimately be estimated.
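
The RANSAC/PnP step described above is commonly implemented with OpenCV; the sketch below assumes the per-pixel UV-map predictions have already been converted into matched 2D pixel and 3D model coordinates and that the camera intrinsics K are known.

    import numpy as np
    import cv2

    def pose_from_uv_correspondences(points_3d, points_2d, K):
        """Recover the object pose from 2D-3D correspondences with RANSAC + PnP.
        points_3d: (N, 3) model coordinates, points_2d: (N, 2) pixels, K: 3x3 intrinsics."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            points_3d.astype(np.float32), points_2d.astype(np.float32),
            K.astype(np.float32), None,
            reprojectionError=3.0, flags=cv2.SOLVEPNP_EPNP)
        if not ok:
            raise RuntimeError("PnP failed to find a consistent pose")
        R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
        return R, tvec, inliers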


3-D Pose Estimation of an Elliptic Object Using Two Coplanar Points (두 개의 공면점을 활용한 타원물체의 3차원 위치 및 자세 추정)

  • Kim, Heon-Hui;Park, Kwang-Hyun;Ha, Yun-Su
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.23-35 / 2012
  • This paper presents a 3-D pose (position and orientation) estimation method for an elliptic object in 3-D space. It is difficult to determine the 3-D pose parameters of an elliptic feature solely by interpreting its projection onto an image plane. As an alternative, we propose a two-point-based pose estimation algorithm that recovers the 3-D information of an elliptic feature. The proposed algorithm uniquely determines a homogeneous transformation for a given correspondence set consisting of an ellipse and two coplanar points defined on the model plane and the image plane, respectively. For each plane, two triangular features are extracted from the ellipse and the two points based on polarity in the 2-D projection space. A planar homography is first estimated from the triangular feature correspondences and then decomposed into 3-D pose parameters. The proposed method is evaluated through a series of experiments analyzing the 3-D pose estimation errors and the sensitivity with respect to point locations.
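
The final step, decomposing a planar homography into pose parameters, can be sketched with the standard plane-to-camera decomposition below (NumPy); this is the textbook decomposition, not necessarily the exact formulation used in the paper.

    import numpy as np

    def pose_from_planar_homography(H, K):
        """Decompose a planar homography H (model plane -> image) into rotation R and
        translation t, given camera intrinsics K."""
        A = np.linalg.inv(K) @ H
        A = A / np.linalg.norm(A[:, 0])      # recover scale from the first column
        r1, r2, t = A[:, 0], A[:, 1], A[:, 2]
        R = np.column_stack([r1, r2, np.cross(r1, r2)])
        U, _, Vt = np.linalg.svd(R)          # project onto the nearest rotation matrix
        R = U @ Vt
        if np.linalg.det(R) < 0:
            R = -R
        return R, t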

An Indoor Pose Estimation System Based on Recognition of Circular Ring Patterns (원형 링 패턴 인식에 기반한 실내용 자세추정 시스템)

  • Kim, Heon-Hui;Ha, Yun-Su
    • Journal of Advanced Marine Engineering and Technology / v.36 no.4 / pp.512-519 / 2012
  • This paper proposes a 3-D pose (position and orientation) estimation system based on the recognition of circular ring patterns. To deal with the monocular vision-based pose estimation problem, we design a circular ring pattern that is simple to recognize. The pose estimation procedure, which exploits the geometric transformation of the circular ring pattern in 2-D perspective projection space, is described in detail. The proposed method is evaluated by analyzing the accuracy and precision of 3-D pose estimation for a quadrotor-type vehicle in 3-D space.
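
Since a circular ring projects to an ellipse under perspective, a first processing step along these lines can be sketched with OpenCV's ellipse fitting; the thresholding and contour selection below are illustrative choices, not the paper's.

    import cv2

    def fit_ring_ellipse(gray_image):
        """Fit an ellipse to the largest contour in a thresholded image.
        The projected circular ring appears as an ellipse under perspective."""
        _, binary = cv2.threshold(gray_image, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_NONE)
        ring = max(contours, key=cv2.contourArea)
        (cx, cy), (major, minor), angle = cv2.fitEllipse(ring)
        return (cx, cy), (major, minor), angle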

Grid Noise Removal in Computed Radiography Images Using the Combined Wavelet Packet-Fourier Method (CR영상에서 웨이블릿 패킷-푸리에 방법을 이용한 그리드 잡음 제거)

  • Lee, A Young;Kim, Dong Youn
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.11 / pp.175-182 / 2012
  • Scattered radiation always occurs when X-rays strike an object. Antiscatter grids are used to absorb the scattered X-rays, but the grid pattern is superimposed on the projection radiography images. When these images are displayed on a monitor, moiré patterns overlap the images and obscure anatomical information. Most previous studies removed the grid noise by calculating or observing the grid frequencies in the one-dimensional frequency domain, a two-dimensional wavelet transform, or the Fourier transform; these methods filter out not only the grid noise but also diagnostic information. In this paper, we propose a combined wavelet packet-Fourier method to remove the grid artifact in CR images. For a phantom image, the proposed method achieved an SNR 5.2 to 7.4 dB higher than other methods, and for CR images the loss of image information was minimized by rejecting the grid noise bands effectively while leaving the remaining bands unchanged.
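
A minimal sketch of the combined wavelet packet-Fourier idea is given below using PyWavelets and NumPy: decompose the image into wavelet packet sub-bands, notch out the dominant spectral peak inside one detail sub-band, and reconstruct. The wavelet, decomposition level, sub-band path, and notch width are illustrative assumptions, not the paper's settings.

    import numpy as np
    import pywt

    def suppress_grid_noise(image, band_path='h', notch_width=3):
        """Notch-filter the strongest spectral peak inside one wavelet packet
        sub-band, then reconstruct the image from the modified sub-bands."""
        wp = pywt.WaveletPacket2D(data=image.astype(np.float64),
                                  wavelet='db4', mode='symmetric', maxlevel=1)
        band = wp[band_path].data
        spectrum = np.fft.fftshift(np.fft.fft2(band))
        mag = np.abs(spectrum)
        cr, cc = mag.shape[0] // 2, mag.shape[1] // 2
        mag[cr - 2:cr + 3, cc - 2:cc + 3] = 0                 # ignore the DC peak
        r0, c0 = np.unravel_index(np.argmax(mag), mag.shape)  # assumed grid-noise peak
        for r, c in ((r0, c0), (2 * cr - r0, 2 * cc - c0)):   # peak and its mirror
            spectrum[max(r - notch_width, 0):r + notch_width + 1,
                     max(c - notch_width, 0):c + notch_width + 1] = 0
        wp[band_path] = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum)))
        return wp.reconstruct(update=False)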

Automatic Mask Generation for 3D Makeup Simulation (3차원 메이크업 시뮬레이션을 위한 자동화된 마스크 생성)

  • Kim, Hyeon-Joong;Kim, Jeong-Sik;Choi, Soo-Mi
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.397-402 / 2008
  • In this paper, we develop an automated mask generation method for applying precise painting to the makeup target in a haptic-interaction-based 3D virtual face makeup simulation. The mask is generated in a preprocessing step before the makeup simulation. First, the user's facial texture image and a 3D geometric surface model are acquired from a 3D scanner. Image processing algorithms such as the AdaBoost algorithm, Canny edge detection, and color model conversion are applied to the acquired facial texture image to determine the key feature regions to be masked (eyes, lips), and a 2D mask region is determined from the face image. The generated mask image is then projected onto the 3D surface geometry model and used to label the final 3D feature-region mask. The mask determined through this preprocessing is used to perform natural makeup simulation through a virtual interface based on a haptic device and a stereoscopic display. The method developed in this work automatically determines the mask regions to be made up without any user intervention in the preprocessing step, providing a precise and easy makeup painting interface.
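
A rough 2D counterpart of the mask-generation step can be sketched with OpenCV, using an AdaBoost-trained Haar cascade for the eye regions and Canny edges as a boundary-refinement cue; the cascade file and thresholds below are illustrative, not the paper's exact choices.

    import cv2
    import numpy as np

    def face_feature_mask(texture_bgr):
        """Return a rough eye-region mask and an edge map for a facial texture image."""
        gray = cv2.cvtColor(texture_bgr, cv2.COLOR_BGR2GRAY)
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + 'haarcascade_eye.xml')
        mask = np.zeros_like(gray)
        for (x, y, w, h) in eye_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                         minNeighbors=5):
            mask[y:y + h, x:x + w] = 255          # mark each detected eye region
        edges = cv2.Canny(gray, 50, 150)          # edge cue for refining boundaries
        return mask, edges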


Geometry Padding for Segmented Sphere Projection (SSP) in 360 Video (360 비디오의 SSP 를 위한 기하학적 패딩)

  • Myeong, Sang-Jin;Kim, Hyun-Ho;Yoon, Yong-Uk;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.143-144 / 2018
  • With the spread of VR applications, 360 video is attracting attention as an immersive medium, and 360 video coding is also being considered in VVC (Versatile Video Coding), the post-HEVC standardization effort underway in JVET (Joint Video Experts Team). The 2D image converted for 360 video coding may contain discontinuities between projection faces as well as inactive regions, both of which degrade coding efficiency. This paper presents an efficient geometric padding technique that reduces these discontinuities and inactive regions in SSP (Segmented Sphere Projection). Experimental results show that the proposed method improves subjective quality compared with conventional SSP, which pads by copying.


Development of Quality Assurance Software for PRESAGE^REU Gel Dosimetry (PRESAGE^REU 겔 선량계의 분석 및 정도 관리 도구 개발)

  • Cho, Woong;Lee, Jaegi;Kim, Hyun Suk;Wu, Hong-Gyun
    • Progress in Medical Physics / v.25 no.4 / pp.233-241 / 2014
  • The aim of this study is to develop a new software tool for 3D dose verification using the PRESAGE^REU gel dosimeter. The tool includes the following functions: importing 3D doses from treatment planning systems (TPS), importing 3D optical densities (OD), converting ODs to doses, 3D registration between two volumetric data sets by translational and rotational transformations, and evaluation with the 3D gamma index. To obtain the correlation between ODs and doses, CT images of a cylindrical PRESAGE^REU gel were acquired, and a volumetric modulated arc therapy (VMAT) plan was designed to deliver doses from 1 Gy to 6 Gy to six disk-shaped virtual targets along the z-axis. After the VMAT plan was delivered to the targets, 3D OD data were reconstructed from 512 projections acquired with a Vista optical CT scanner (Modus Medical Devices Inc., Canada) every 2 hours after irradiation. A curve for converting ODs to doses was derived by comparing the TPS dose profile with the OD profile along the z-axis, and the 3D OD data were converted to absorbed doses using this curve. Supra-linearity was observed between doses and ODs, and the ODs decayed by about 60% per 24 hours depending on their magnitudes. The doses measured from the PRESAGE^REU gel agreed well with the TPS doses in the central region, but large under-doses were observed in the peripheral region of the cylindrical geometry. The gamma passing rate for the 3D doses was 70.36% under gamma criteria of 3% dose difference and 3 mm distance to agreement. The low passing rate resulted from the mismatch of the refractive index between the PRESAGE gel and the oil bath in the optical CT scanner. In conclusion, the developed software is useful for 3D dose verification with PRESAGE gel dosimetry, but further improvement of the gel dosimetry system is required.
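
The OD-to-dose conversion step described above can be sketched as a simple curve fit in Python; a low-order polynomial is assumed here to accommodate the reported supra-linearity, which is an illustrative choice rather than the paper's exact calibration model.

    import numpy as np

    def fit_od_to_dose(od_profile, tps_dose_profile, degree=2):
        """Fit a calibration curve mapping optical density (OD) to absorbed dose by
        comparing an OD profile with the TPS dose profile sampled along the same axis."""
        coeffs = np.polyfit(od_profile, tps_dose_profile, degree)
        return np.poly1d(coeffs)

    # usage: curve = fit_od_to_dose(od_along_z, tps_dose_along_z)
    #        dose_volume = curve(od_volume)   # apply voxel-wise to the 3D OD data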

Microsoft Kinect-based Indoor Building Information Model Acquisition (Kinect(RGB-Depth Camera)를 활용한 실내 공간 정보 모델(BIM) 획득)

  • Kim, Junhee;Yoo, Sae-Woung;Min, Kyung-Won
    • Journal of the Computational Structural Engineering Institute of Korea / v.31 no.4 / pp.207-213 / 2018
  • This paper investigates the applicability of the Microsoft Kinect®, an RGB-depth camera, for acquiring 3D images and spatial information of a target. The relationship between the Kinect camera image and the pixel coordinate system is formulated. Calibration of the camera provides the depth and RGB information of the target. The intrinsic parameters are calculated through a checkerboard experiment, yielding the focal length, principal point, and distortion coefficients. The extrinsic parameters describing the relationship between the two Kinect cameras consist of a rotation matrix and a translation vector. The images in the 2D projection space are converted to 3D images, yielding spatial information on the basis of the depth and RGB information. The measurement is verified by comparison with the length and location of the target structure in the 2D images.
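
The conversion from the 2D projection (pixel) space to 3D points using the calibrated intrinsics can be sketched as the standard pinhole back-projection below; fx, fy, cx, cy stand for the focal lengths and principal point from the checkerboard calibration, and lens distortion is ignored for brevity.

    import numpy as np

    def depth_to_point_cloud(depth_m, fx, fy, cx, cy):
        """Back-project a depth image (in meters) into 3D camera coordinates
        using the pinhole camera model."""
        h, w = depth_m.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        z = depth_m
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=-1)   # (H, W, 3) array of 3D points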