• Title/Summary/Keyword: Image-to-Image Translation

3D Model Retrieval Using Sliced Shape Image (단면 형상 영상을 이용한 3차원 모델 검색)

  • Park, Yu-Sin;Seo, Yung-Ho;Yun, Yong-In;Kwon, Jun-Sik;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.27-37 / 2008
  • Applications of 3D data are increasing with advances in multimedia techniques and content, so 3D data must be managed and retrieved efficiently. In this paper, we propose a new method based on sliced shapes that efficiently extracts a feature descriptor for shape-based retrieval of 3D models. Since the feature descriptor of a 3D model should be invariant to translation, rotation, and scale, a 3D model retrieval system requires model normalization. This paper uses principal component analysis (PCA) to normalize all models. The proposed algorithm finds the direction of each axis by PCA and creates n planes orthogonal to each axis; these planes are used to extract sliced shape images. A sliced shape image is the 2D cross-section created by intersecting the 3D model with one of these planes. The proposed feature descriptor is the distribution of Euclidean distances from the center point of a sliced shape image to its outline. Performance is evaluated with the average of the normalized modified retrieval rank (ANMRR), a standard evaluation measure from MPEG-7. Our experimental results demonstrate that the proposed method is an efficient 3D model retrieval approach.
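
A minimal sketch of this sliced-shape pipeline in Python (an illustration, not the authors' code), assuming the model is available as an N×3 point cloud; the slice thickness, number of planes, and histogram binning are illustrative choices, and points lying near each slicing plane stand in for the rendered outline of the sliced shape image.

```python
import numpy as np

def pca_normalize(points):
    """Center a 3D point cloud, rotate it onto its principal axes, and scale it
    so the largest extent is 1 (translation/rotation/scale normalization)."""
    centered = points - points.mean(axis=0)
    # Right singular vectors of the centered cloud are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    aligned = centered @ vt.T
    return aligned / np.abs(aligned).max()

def sliced_shape_descriptor(points, n_planes=8, n_bins=16):
    """For each principal axis, slice the normalized model with n_planes planes
    orthogonal to that axis and histogram the distances from each slice's
    centroid to its points."""
    pts = pca_normalize(points)
    histograms = []
    for axis in range(3):
        levels = np.linspace(pts[:, axis].min(), pts[:, axis].max(), n_planes + 2)[1:-1]
        for level in levels:
            # Points close to the slicing plane approximate the sliced shape image.
            slice_pts = pts[np.abs(pts[:, axis] - level) < 0.05]
            if len(slice_pts) == 0:
                histograms.append(np.zeros(n_bins))
                continue
            in_plane = np.delete(slice_pts, axis, axis=1)
            d = np.linalg.norm(in_plane - in_plane.mean(axis=0), axis=1)
            hist, _ = np.histogram(d, bins=n_bins, range=(0.0, 1.5))
            histograms.append(hist / max(hist.sum(), 1))
    return np.concatenate(histograms)
```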

A SHAPE FEATURE EXTRACTION FOR COMPLEX TOPOGRAPHICAL IMAGES

  • Kwon Yong-Il;Park Ho-Hyun;Lee Seok-Lyong;Chung Chin-Wan
    • Proceedings of the KSRS Conference / 2005.10a / pp.575-578 / 2005
  • Topographical images, such as aerial or satellite images, are usually similar in colors and textures and complex in shapes. Thus, shape features must be used to efficiently retrieve a query image from topographical image databases. In this paper, we propose a shape feature extraction method suitable for topographical images. The method improves on the existing projection in Cartesian coordinates by performing the projection operation in polar coordinates. Along each angular direction of the polar coordinates, it extracts three attributes: the number of region pixels, the distance from the centroid to the region boundary, and the number of alternations between region and background. It can extract the features of complex shape objects that may have holes and disconnected regions. An advantage of our method is that it is invariant to rotation, scale, and translation of images. Finally, we show the advantages of our method through experiments comparing it with CSS, one of the most successful methods in the area of shape feature extraction.

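A minimal Python sketch of the polar-coordinate projection described in the abstract above, assuming the region is given as a binary mask; ray sampling from the centroid, and the resolutions n_angles and n_radii, are assumptions rather than the paper's exact procedure.

```python
import numpy as np

def polar_projection_features(mask, n_angles=64, n_radii=128):
    """Sample a binary region mask along rays from its centroid and, per angular
    direction, record (1) the number of region pixels on the ray, (2) the farthest
    region radius, and (3) the number of region/background alternations."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    max_r = np.hypot(mask.shape[0], mask.shape[1])
    feats = np.zeros((n_angles, 3))
    for i, theta in enumerate(np.linspace(0, 2 * np.pi, n_angles, endpoint=False)):
        rs = np.linspace(0, max_r, n_radii)
        px = np.clip((cx + rs * np.cos(theta)).astype(int), 0, mask.shape[1] - 1)
        py = np.clip((cy + rs * np.sin(theta)).astype(int), 0, mask.shape[0] - 1)
        ray = mask[py, px].astype(int)
        feats[i, 0] = ray.sum()                                        # region pixels along the ray
        feats[i, 1] = rs[ray.nonzero()[0][-1]] if ray.any() else 0.0   # farthest region radius
        feats[i, 2] = np.count_nonzero(np.diff(ray))                   # region/background alternations
    return feats
```

Because a rotation of the region becomes a circular shift along the angle axis, rotation-invariant matching can be obtained by comparing the feature rows under circular shifts or by taking FFT magnitudes along that axis.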

DiLO: Direct light detection and ranging odometry based on spherical range images for autonomous driving

  • Han, Seung-Jun;Kang, Jungyu;Min, Kyoung-Wook;Choi, Jungdan
    • ETRI Journal / v.43 no.4 / pp.603-616 / 2021
  • Over the last few years, autonomous vehicles have progressed very rapidly. Odometry, which estimates displacement from consecutive sensor inputs, is an essential technique for autonomous driving. In this article, we propose a fast, robust, and accurate odometry technique. The proposed technique is light detection and ranging (LiDAR)-based direct odometry, which uses a spherical range image (SRI) that projects a three-dimensional point cloud onto a two-dimensional spherical image plane. Direct odometry was developed as a vision-based method, so a fast execution speed can be expected; however, applying it to LiDAR data is difficult because of the data's sparsity. To solve this problem, we propose an SRI generation method with its mathematical analysis, two key-point sampling methods using the SRI to increase precision and robustness, and a fast optimization method. The proposed technique was tested on the KITTI dataset and in real environments. Evaluation yielded a translation error of 0.69%, a rotation error of 0.0031°/m on the KITTI training dataset, and an execution time of 17 ms. The results demonstrate precision comparable with the state of the art and remarkably higher speed than conventional techniques.
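
A minimal sketch of the SRI projection described above, assuming a LiDAR with a roughly Velodyne-like vertical field of view; the image resolution and FOV limits are illustrative parameters, not the values used in the paper.

```python
import numpy as np

def spherical_range_image(points, height=64, width=1024,
                          fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an N x 3 LiDAR point cloud onto a 2D spherical range image.
    Rows index elevation, columns index azimuth, pixel values store range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                     # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1.0, 1.0))
    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    # Map angles to image coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * width
    v = (1.0 - (pitch - fov_down) / (fov_up - fov_down)) * height
    u = np.clip(np.floor(u), 0, width - 1).astype(int)
    v = np.clip(np.floor(v), 0, height - 1).astype(int)
    image = np.zeros((height, width))
    # Write points in order of decreasing range so the nearest return wins
    # when several points land on the same pixel.
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```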

Image Translation of SDO/AIA Multi-Channel Solar UV Images into Another Single-Channel Image by Deep Learning

  • Lim, Daye;Moon, Yong-Jae;Park, Eunsu;Lee, Jin-Yi
    • The Bulletin of The Korean Astronomical Society / v.44 no.2 / pp.42.3-42.3 / 2019
  • We translate Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) ultraviolet (UV) multi-channel images into another UV single-channel image using a deep learning algorithm based on conditional generative adversarial networks (cGANs). The base input channel, which has the highest correlation coefficient (CC) among the UV channels of AIA, is 193 Å. To complement this channel, we choose two channels, 1600 and 304 Å, which represent the upper photosphere and the chromosphere, respectively. The input channels for the three models are single (193 Å), dual (193+1600 Å), and triple (193+1600+304 Å), respectively. Quantitative comparisons are made on test data sets. The main results of this study are as follows. First, the single model successfully produces other coronal channel images but is less successful for the chromospheric channel (304 Å) and much less successful for the two photospheric channels (1600 and 1700 Å). Second, the dual model shows a noticeable improvement in the CC between the model outputs and the ground truths for 1700 Å. Third, the triple model can generate all other channel images with relatively high CCs, larger than 0.89. Our results show that if three channels from the photosphere, chromosphere, and corona are selected, the other multi-channel images could be generated by deep learning. We expect this investigation to be a complementary tool for choosing a few UV channels for future solar small and/or deep space missions.

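The cGAN models themselves are not reproduced here; the short sketch below only shows how the correlation coefficient (CC) quoted in the abstract (for example, the values above 0.89 for the triple model) can be computed between a generated channel image and its ground truth.

```python
import numpy as np

def pixel_correlation(generated, ground_truth):
    """Pearson correlation coefficient between a generated solar channel image
    and its ground-truth counterpart."""
    g = generated.astype(np.float64).ravel()
    t = ground_truth.astype(np.float64).ravel()
    g -= g.mean()
    t -= t.mean()
    return float((g @ t) / (np.linalg.norm(g) * np.linalg.norm(t) + 1e-12))
```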

How Image Quality Affects Determination of Target Displacement When Using kV Cone-beam Computed Tomography (CBCT) (kV Cone-beam CT를 사용한 치료준비에서 재구성 영상의 품질이 표적 위치 결정에 미치는 영향)

  • Oh, Seung-Jong;Kim, Si-Yong;Suh, Tae-Suk
    • Progress in Medical Physics / v.17 no.4 / pp.207-211 / 2006
  • The advent of kV cone-beam computed tomography (CBCT) integrated with a linear accelerator allows for more accurate image-guided radiotherapy (IGRT). IGRT is a technique that corrects target displacement based on internal body information. To do this, a CBCT image set is acquired just before the beam is delivered and registered with the simulation CT image set. In this study, we compare the registration results according to the CBCT reconstruction quality (either high or medium). A total of 56 CBCT projection data sets from 6 patients were analyzed. The translation vector differences were within 1 mm in all but 3 cases. For the rotation displacement difference, components of all three axes were considered, and 3 out of 168 (56 × 3 axes) cases showed more than 1° of rotation difference.

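A small, hypothetical helper illustrating the comparison reported above: given the couch shifts and rotations obtained from high- and medium-quality CBCT reconstructions of the same projection data, it reports the per-axis differences and flags rotations differing by more than 1°. The function name and inputs are assumptions for illustration.

```python
import numpy as np

def registration_difference(shift_high, shift_medium, rot_high, rot_medium):
    """Compare registration results from high- and medium-quality reconstructions:
    per-axis and total translation difference (mm) and per-axis rotation
    difference (degrees)."""
    dt = np.asarray(shift_high, float) - np.asarray(shift_medium, float)
    dr = np.asarray(rot_high, float) - np.asarray(rot_medium, float)
    return {
        "translation_diff_mm": dt,
        "translation_diff_norm_mm": float(np.linalg.norm(dt)),
        "rotation_diff_deg": dr,
        "rotation_exceeds_1deg": np.abs(dr) > 1.0,   # the 1-degree threshold noted above
    }
```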

Robust Watermarking Scheme Against Geometrical Attacks Using Alignment of Image Features (영상특징 정렬을 이용한 기하학적 공격에 강인한 워터마킹 기법)

  • Ko Yun-Ho
    • Journal of Korea Multimedia Society / v.9 no.5 / pp.624-634 / 2006
  • This paper presents a new watermarking scheme that is robust against geometrical attacks such as translation and rotation. The proposed method is based on the conventional PSADT (Polar Coordinates Shape Adaptive Discrete Transform) method, which is a robust watermarking scheme for arbitrarily shaped images such as character images. The PSADT method shows perfect robustness against geometrical attacks if there is no change in the shape of the image object. However, it cannot be used to watermark general rectangular images because the watermarked signals on the embedding and extracting sides lose alignment. To overcome this problem, we propose a new watermarking scheme that aligns the watermark signal using image-inherent features, especially corners. Namely, the proposed method determines a consistent target region whose shape and position are not changed by malicious attacks and then embeds the watermark in it using the PSADT method. Experimental results show the robustness of the proposed method against geometrical attacks as well as image compression.

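The PSADT embedding itself is not reproduced here; the sketch below only illustrates the corner-based alignment idea, using OpenCV corner detection to anchor a candidate target region. Anchoring an axis-aligned crop at the corner centroid is an assumption for illustration, not the paper's region-selection rule.

```python
import cv2
import numpy as np

def corner_aligned_region(image, region_size=128):
    """Detect strong corners and use their centroid to anchor a target region,
    so the embedder and the extractor can agree on where the watermark lives."""
    gray = image if image.ndim == 2 else cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=4, qualityLevel=0.01, minDistance=20)
    if corners is None:
        raise ValueError("no corners found")
    corners = corners.reshape(-1, 2)
    # Re-detecting the same corners after an attack recovers the same anchor point.
    cx, cy = corners.mean(axis=0)
    half = region_size // 2
    x0 = int(np.clip(cx - half, 0, gray.shape[1] - region_size))
    y0 = int(np.clip(cy - half, 0, gray.shape[0] - region_size))
    return gray[y0:y0 + region_size, x0:x0 + region_size], (x0, y0)
```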

Robust 2-D Object Recognition Using Bispectrum and LVQ Neural Classifier

  • Han, Soo-Whan;Woo, Young-Woon
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.10a / pp.255-262 / 1998
  • This paper presents a translation-, rotation-, and scale-invariant methodology for the recognition of closed planar shape images using the bispectrum of a contour sequence and a learning vector quantization (LVQ) neural classifier. The contour sequences obtained from the closed planar images represent the Euclidean distance between the centroid and all boundary pixels of the shape and are related to the overall shape of the images. Higher-order spectra based on third-order cumulants are applied to this contour sequence to extract fifteen bispectral feature vectors for each planar image. These feature vectors, which are invariant to shape translation, rotation, and scale transformation, can be used to represent two-dimensional planar images and are fed into a neural network classifier. The LVQ architecture is chosen as the neural classifier because the network is easy and fast to train and its structure is relatively simple. Experimental recognition results with eight different shapes of aircraft images are presented to illustrate the high performance of the proposed method even when the target images are significantly corrupted by noise.

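A minimal sketch of the contour-sequence construction described above, plus one plausible bispectrum-style feature vector; the paper's exact fifteen cumulant-based features are not specified in the abstract, so the frequency pairs and normalization here are assumptions.

```python
import numpy as np

def contour_sequence(boundary_xy, n_samples=128):
    """Distance from the shape centroid to points sampled along the boundary;
    this 1-D sequence is the input to the bispectral features."""
    boundary_xy = np.asarray(boundary_xy, float)
    centroid = boundary_xy.mean(axis=0)
    d = np.linalg.norm(boundary_xy - centroid, axis=1)
    idx = np.linspace(0, len(d) - 1, n_samples).astype(int)
    return d[idx]

def bispectral_features(sequence, n_features=15):
    """Magnitudes of the bispectrum X(f1) X(f2) X*(f1 + f2) at a few low-frequency
    pairs; dividing by the feature sum cancels the overall cubic scale factor."""
    x = sequence - sequence.mean()
    X = np.fft.fft(x)
    f1 = 1
    feats = np.array([np.abs(X[f1] * X[f2] * np.conj(X[f1 + f2]))
                      for f2 in range(1, n_features + 1)])
    return feats / (feats.sum() + 1e-12)
```

Because a rotation of the shape only circularly shifts the contour sequence, and the bispectrum is shift-invariant, these features keep the invariances the abstract describes.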

Fuzzy Mean Method with Bispectral Features for Robust 2D Shape Classification

  • Woo, Young-Woon;Han, Soo-Whan
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.10a / pp.313-320 / 1999
  • In this paper, a translation-, rotation-, and scale-invariant system for the classification of closed 2D images using the bispectrum of a contour sequence and the weighted fuzzy mean method is derived and compared with a classification process using one of the competitive neural algorithms, learning vector quantization (LVQ). The bispectrum based on third-order cumulants is applied to the contour sequences of the images to extract fifteen feature vectors for each planar image. These bispectral feature vectors, which are invariant to shape translation, rotation, and scale transformation, can be used to represent two-dimensional planar images and are fed into a classifier using the weighted fuzzy mean method. Experiments with eight different shapes of aircraft images are presented to illustrate the high performance of the proposed classifier.

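The weighted fuzzy mean method is not detailed in the abstract, so the sketch below is only one plausible reading: class prototypes are computed as fuzzy-weighted means of the bispectral feature vectors, and a test vector is assigned to the nearest prototype.

```python
import numpy as np

def fuzzy_class_means(features, labels, m=2.0):
    """Class prototypes as fuzzy-weighted means: each training vector contributes
    with a weight based on its distance to a crisp class mean (assumed form)."""
    prototypes = {}
    for c in np.unique(labels):
        X = features[labels == c]
        crisp = X.mean(axis=0)
        d = np.linalg.norm(X - crisp, axis=1) + 1e-12
        w = 1.0 / d ** (2.0 / (m - 1.0))        # closer samples get larger weights
        prototypes[c] = (w[:, None] * X).sum(axis=0) / w.sum()
    return prototypes

def classify(prototypes, x):
    """Assign the class of the nearest fuzzy prototype."""
    return min(prototypes, key=lambda c: np.linalg.norm(x - prototypes[c]))
```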

A Model-based 3-D Pose Estimation Method from Line Correspondences of Polyhedral Objects

  • Kang, Dong-Joong;Ha, Jong-Eun
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 2003.10a / pp.762-766 / 2003
  • In this paper, we present a new approach to the problem of estimating the 3-D camera location and orientation from a matched set of 3-D model and 2-D image features. An iterative least-squares method is used to solve for rotation and translation simultaneously. Because conventional methods that solve for rotation first and then translation do not provide good solutions, we derive an error equation that uses roll-pitch-yaw angles to represent the rotation matrix. To minimize the error equation, the Levenberg-Marquardt algorithm is used, with a uniform sampling strategy over the rotation space to avoid getting stuck in a local minimum. Experimental results using real images are presented.

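A minimal sketch of this joint rotation-translation estimation, assuming model lines are given as 3-D endpoint pairs, image lines as normalized coefficients (a, b, c) of a*u + b*v + c = 0, and K as the camera intrinsic matrix; random restarts stand in for the paper's uniform sampling of rotation space.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, model_lines, image_lines, K):
    """Project both endpoints of each 3-D model line with the current
    roll-pitch-yaw + translation and measure their distance to the matched
    2-D image line."""
    rpy, t = params[:3], params[3:]
    R = Rotation.from_euler("xyz", rpy).as_matrix()
    res = []
    for (P0, P1), (a, b, c) in zip(model_lines, image_lines):
        for P in (P0, P1):
            p = K @ (R @ np.asarray(P, float) + t)
            u, v = p[0] / p[2], p[1] / p[2]
            res.append(a * u + b * v + c)
    return np.asarray(res)

def estimate_pose(model_lines, image_lines, K, n_starts=8):
    """Levenberg-Marquardt over (roll, pitch, yaw, tx, ty, tz), restarted from
    several sampled rotations to avoid local minima."""
    best = None
    for rpy0 in np.random.uniform(-np.pi, np.pi, size=(n_starts, 3)):
        x0 = np.concatenate([rpy0, [0.0, 0.0, 5.0]])
        sol = least_squares(residuals, x0, method="lm",
                            args=(model_lines, image_lines, K))
        if best is None or sol.cost < best.cost:
            best = sol
    return best.x
```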

A study on the geometric correction for the digital subtraction radiograph (디지털 공제방사선영상의 기하학적 보정에 관한 연구)

  • Lim Suk-Young;Koh Kwang-Joon
    • Imaging Science in Dentistry / v.31 no.1 / pp.23-34 / 2001
  • Purpose: To develop a new subtraction program for registering digital periapical images based on the correspondence of anatomic structures. Materials and Methods: The digital periapical images were obtained with the Digora system and Rinn XCP equipment after translations of 1-16 mm and rotations of 2-20° at the premolar and molar areas of a dried human mandible. The new subtraction program, the NIH Image program, and the Emago/Advanced program were compared by peak signal-to-noise ratio (PSNR). Results: The new subtraction program was superior to the NIH Image program and the Emago/Advanced program for translations up to 16 mm and horizontal angulations up to 4°. Conclusion: The new subtraction program can be used for subtracting digital periapical images.

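The programs above were compared by PSNR; the standard definition, assuming 8-bit radiographs, is sketched below.

```python
import numpy as np

def psnr(reference, registered, max_value=255.0):
    """Peak signal-to-noise ratio between a reference radiograph and a
    geometrically corrected follow-up image; higher means better correction."""
    mse = np.mean((reference.astype(np.float64) - registered.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)
```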