• Title/Abstract/Keyword: Low Resolution Feature

Search results: 145 items (processing time: 0.02 sec)

작물의 저해상도 이미지에 대한 3차원 복원에 관한 연구 (Study on Three-dimension Reconstruction to Low Resolution Image of Crops)

  • 오장석;홍형길;윤해룡;조용준;우성용;송수환;서갑호;김대희
    • 한국기계가공학회지 / Vol. 18, No. 8 / pp.98-103 / 2019
  • A more accurate method of feature point extraction and matching for three-dimensional reconstruction from low-resolution images of crops is proposed herein. Feature point extraction and matching are fundamental in computer vision: in addition to three-dimensional reconstruction from exact matches, map building and camera localization, as in simultaneous localization and mapping (SLAM), can be computed. The results of this study suggest methods applicable to low-resolution images that still produce accurate results, which is expected to contribute to a system for measuring crop growth condition.
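
The abstract does not give implementation details, but the pipeline it describes (feature point extraction, matching, and geometry recovery as in SLAM or structure-from-motion) can be sketched with standard tools. The following is a minimal, generic two-view sketch using OpenCV, not the paper's method; the camera intrinsics `K` and the image filenames are placeholder assumptions.

```python
# Sketch: ORB feature matching and two-view reconstruction with OpenCV.
# This is a generic structure-from-motion baseline, not the paper's method;
# the camera intrinsics K and image paths are placeholder assumptions.
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # assumed intrinsics for a 640x480 camera
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("crop_view1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("crop_view2.png", cv2.IMREAD_GRAYSCALE)

# ORB is cheap to compute and works reasonably well on low-resolution images.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Cross-checked Hamming matching keeps only mutually best matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC rejects outlier matches, then recover the pose.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask_pose = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate inlier correspondences into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inl = mask_pose.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
points3d = (pts4d[:3] / pts4d[3]).T
print(points3d.shape)
```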

Sparse Representation based Two-dimensional Bar Code Image Super-resolution

  • Shen, Yiling;Liu, Ningzhong;Sun, Han
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 4 / pp.2109-2123 / 2017
  • This paper presents a super-resolution reconstruction method based on sparse representation for two-dimensional bar code images. Considering the characteristics of two-dimensional bar code images, Kirsch and LBP (local binary pattern) operators are used to extract edge-gradient and texture features. Feature extraction combines these two features with two additional second-order derivatives. Through joint dictionary learning on pairs of low-resolution and high-resolution image patches, corresponding patches share the same sparse representation. In addition, a global constraint is imposed on the initial estimate of the high-resolution image, which brings the reconstructed result closer to the real one. Experimental results demonstrate the effectiveness of the proposed algorithm for two-dimensional bar code images in comparison with other reconstruction algorithms.
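
As a rough illustration of the feature-extraction step described here (Kirsch edge gradients, LBP texture, and two second-order derivatives), the sketch below computes such a feature vector for a single low-resolution patch. The patch size is an assumption, and the joint dictionary learning and sparse-coding stages of the paper are omitted.

```python
# Sketch: Kirsch edge-gradient, LBP texture, and second-order derivative
# features for one low-resolution patch. The 9x9 patch size is an assumption;
# the dictionary learning / sparse coding stages are not shown.
import numpy as np
from scipy.ndimage import convolve
from skimage.feature import local_binary_pattern

# The eight directional Kirsch compass kernels.
north = np.array([[5, 5, 5],
                  [-3, 0, -3],
                  [-3, -3, -3]], dtype=float)
northwest = np.array([[5, 5, -3],
                      [5, 0, -3],
                      [-3, -3, -3]], dtype=float)
kirsch_kernels = [np.rot90(north, k) for k in range(4)] + \
                 [np.rot90(northwest, k) for k in range(4)]

def patch_features(patch_u8):
    """Concatenate Kirsch responses, an LBP map, and 2nd-order derivatives."""
    p = patch_u8.astype(float)
    kirsch = np.max([np.abs(convolve(p, k)) for k in kirsch_kernels], axis=0)
    lbp = local_binary_pattern(patch_u8, P=8, R=1, method="uniform")
    d2x = convolve(p, np.array([[1.0, -2.0, 1.0]]))      # horizontal 2nd derivative
    d2y = convolve(p, np.array([[1.0], [-2.0], [1.0]]))  # vertical 2nd derivative
    return np.concatenate([kirsch.ravel(), lbp.ravel(), d2x.ravel(), d2y.ravel()])

patch = (np.random.rand(9, 9) * 255).astype(np.uint8)   # stand-in LR patch
print(patch_features(patch).shape)                       # (324,) = 4 * 81
```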

An Implementation of Change Detection System for High-resolution Satellite Imagery using a Floating Window

  • Lim, Young-Jae;Jeong, Soo;Kim, Kyung-Ok
    • Korean Society of Remote Sensing Conference Proceedings / Proceedings of the 2002 International Symposium on Remote Sensing / pp.275-279 / 2002
  • Change detection is a useful technology applicable to various fields; it extracts temporal change information by comparing and analyzing multi-temporal satellite images. In particular, change detection based on high-resolution satellite imagery can extract useful change information for purposes that low- or medium-resolution imagery cannot serve, such as environmental inspection, analysis of disaster damage, detection of illegal buildings, and military use. However, because of the particular characteristics of high-resolution satellite imagery, the pixel-based methods used for low-resolution imagery cannot be applied; a feature-based algorithm built on geographical and morphological features must be used instead. This paper presents a system that builds the change map by digitizing the boundary of each changed object. In this system, the change map is produced by manual or semi-automatic digitizing through a user interface implemented with a floating window, which allows signs of change, such as construction or demolition, to be detected more efficiently.
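
A window-based, feature-driven comparison of two co-registered acquisitions could be sketched as follows; this is only an illustration of the floating-window idea, using edge density as a stand-in morphological feature. The window size, step, threshold, and filenames are assumptions, and the paper's manual/semi-automatic digitizing interface is not reproduced.

```python
# Sketch: flag candidate change windows between two co-registered
# high-resolution images using a simple morphological feature (edge density)
# rather than per-pixel differencing. Window size and threshold are assumed.
import cv2
import numpy as np

def edge_density(window):
    """Fraction of edge pixels in a window (a crude morphological feature)."""
    edges = cv2.Canny(window, 100, 200)
    return float(np.count_nonzero(edges)) / edges.size

def candidate_change_windows(img_t1, img_t2, win=64, step=32, thresh=0.05):
    """Slide a window over both dates and flag windows whose edge-density
    feature changed by more than `thresh` (e.g., new construction)."""
    flagged = []
    h, w = img_t1.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            f1 = edge_density(img_t1[y:y + win, x:x + win])
            f2 = edge_density(img_t2[y:y + win, x:x + win])
            if abs(f1 - f2) > thresh:
                flagged.append((x, y, win, win))
    return flagged

img_t1 = cv2.imread("scene_2001.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
img_t2 = cv2.imread("scene_2002.png", cv2.IMREAD_GRAYSCALE)
print(len(candidate_change_windows(img_t1, img_t2)))
```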


Content-Based Image Retrieval Using Combined Color and Texture Features Extracted by Multi-resolution Multi-direction Filtering

  • Bu, Hee-Hyung;Kim, Nam-Chul;Moon, Chae-Joo;Kim, Jong-Hwa
    • Journal of Information Processing Systems / Vol. 13, No. 3 / pp.464-475 / 2017
  • In this paper, we present a new texture image retrieval method that combines color and texture features extracted from images by a set of multi-resolution multi-direction (MRMD) filters. The chosen MRMD filter set is simple, separates low- and high-frequency information, and provides efficient multi-resolution and multi-direction analysis. The color space used is the HSV color space, which separates into hue, saturation, and value components that are easy to analyze and exhibit characteristics similar to the human visual system. Experiments compare retrieval precision versus recall and feature vector dimensions. The image sets include Corel DB and VisTex DB; Corel_MR DB and VisTex_MR DB, transformed from the two DBs to contain multi-resolution images; and Corel_MD DB and VisTex_MD DB, transformed from the two DBs to contain multi-direction images. According to the experimental results, the proposed method improves upon existing methods in retrieval precision and recall while also reducing feature vector dimensions.
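
Since the exact MRMD filter set is not specified in this abstract, the sketch below uses a Gabor filter bank as a stand-in to show how multi-resolution, multi-direction responses on HSV channels can be turned into a retrieval descriptor. The scales, orientations, statistics, and filenames are illustrative assumptions.

```python
# Sketch: color + texture retrieval features from HSV channels filtered by a
# multi-resolution, multi-direction bank. A Gabor bank is used here as a
# stand-in for the paper's MRMD filters, which this abstract does not specify.
import cv2
import numpy as np

def mrmd_bank(scales=(4, 8, 16), directions=4):
    kernels = []
    for ksize in scales:                       # multi-resolution: kernel size
        for d in range(directions):            # multi-direction: orientation
            theta = d * np.pi / directions
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma=ksize / 4.0,
                                              theta=theta, lambd=ksize / 2.0,
                                              gamma=0.5))
    return kernels

def describe(image_bgr, kernels):
    """Mean/std of filter responses on each HSV channel -> one feature vector."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    feats = []
    for c in range(3):
        for k in kernels:
            resp = cv2.filter2D(hsv[:, :, c], cv2.CV_32F, k)
            feats.extend([resp.mean(), resp.std()])
    return np.array(feats)

# Retrieval: rank database images by Euclidean distance to the query descriptor.
kernels = mrmd_bank()
query = describe(cv2.imread("query.jpg"), kernels)            # placeholder files
db = [describe(cv2.imread(p), kernels) for p in ["db_0.jpg", "db_1.jpg"]]
ranked = np.argsort([np.linalg.norm(query - d) for d in db])
print(ranked)
```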

Multi-resolution Fusion Network for Human Pose Estimation in Low-resolution Images

  • Kim, Boeun;Choo, YeonSeung;Jeong, Hea In;Kim, Chung-Il;Shin, Saim;Kim, Jungho
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 7 / pp.2328-2344 / 2022
  • 2D human pose estimation still faces difficulty in low-resolution images. Most existing top-down approaches scale the target human bounding-box image up to a large size and feed the scaled image into the network. Due to up-sampling, artifacts occur in the low-resolution target images, and the degraded images adversely affect accurate estimation of the joint positions. To address this issue, we propose a multi-resolution input feature fusion network for human pose estimation. Specifically, the bounding-box image of the target human is rescaled to multiple input images of various sizes, and the features extracted from these images are fused in the network. Moreover, we introduce a guiding channel that induces the multi-resolution input features to affect the network differently according to the resolution of the target image. We conduct experiments on the MS COCO dataset, a representative benchmark for 2D human pose estimation, where our method achieves superior performance compared to the strong baseline HRNet and previous state-of-the-art methods.
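
A minimal PyTorch sketch of the multi-resolution input fusion idea with a guiding channel is given below. The layer sizes, branch resolutions, and the way the guiding channel encodes resolution are illustrative assumptions, not the paper's architecture.

```python
# Sketch: fusing features from multiple rescaled copies of the same person
# crop, with an extra "guiding" channel encoding the original resolution.
# Layer sizes and the guiding-channel encoding are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiResFusion(nn.Module):
    def __init__(self, out_size=(64, 48)):
        super().__init__()
        self.out_size = out_size
        # 3 RGB channels + 1 guiding channel per branch
        self.branch = nn.Conv2d(4, 32, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(32 * 3, 64, kernel_size=3, padding=1)
        self.head = nn.Conv2d(64, 17, kernel_size=1)   # 17 COCO keypoint heatmaps

    def forward(self, crop, orig_res):
        feats = []
        for size in [(256, 192), (128, 96), (64, 48)]:
            x = F.interpolate(crop, size=size, mode="bilinear", align_corners=False)
            # Guiding channel: a constant map encoding the target's true
            # resolution relative to this branch, so the network can weight
            # the branches according to the input resolution.
            guide = torch.full_like(x[:, :1], orig_res / size[0])
            x = torch.cat([x, guide], dim=1)
            f = F.relu(self.branch(x))
            feats.append(F.interpolate(f, size=self.out_size, mode="bilinear",
                                       align_corners=False))
        fused = F.relu(self.fuse(torch.cat(feats, dim=1)))
        return self.head(fused)   # per-joint heatmaps

model = MultiResFusion()
crop = torch.rand(1, 3, 80, 60)          # a low-resolution person crop
heatmaps = model(crop, orig_res=80.0)
print(heatmaps.shape)                    # torch.Size([1, 17, 64, 48])
```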

3D 공간상에서의 주변 기울기 정보를 기반에 둔 필터 학습을 통한 MRI 영상 초해상화 (MRI Image Super Resolution through Filter Learning Based on Surrounding Gradient Information in 3D Space)

  • 박성수;김윤수;감진규
    • 한국멀티미디어학회논문지 / Vol. 24, No. 2 / pp.178-185 / 2021
  • Three-dimensional high-resolution magnetic resonance imaging (MRI) provides fine-grained anatomical information for disease diagnosis. However, obtaining high resolution is limited by the long scan time required for wide spatial coverage. Therefore, to obtain a clear high-resolution (HR) image over a wide spatial coverage, a super-resolution technique that converts a low-resolution (LR) MRI image into a high-resolution one is required. In this paper, we propose a super-resolution technique based on filter learning that uses surrounding gradient information in 3D space from 3D MRI images. In the learning step, the gradient features of each voxel are computed through eigen-decomposition of its 3D patch. Based on these features, we learn filters that minimize the intensity difference between LR and HR image pairs for similar features. In the test step, the gradient feature of the patch around each voxel is computed, and the filter corresponding to the closest feature is selected and applied. Training on 100 publicly available T1 brain MRI images from the HCP, we show that performance improves by up to about 11% compared with traditional interpolation methods.
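
One way to realize the per-voxel gradient feature described above is an eigen-decomposition of the 3D structure tensor of the local patch, as in the sketch below. The patch radius and the (omitted) quantization of features into filter buckets are illustrative assumptions.

```python
# Sketch: per-voxel gradient features from the eigen-decomposition of a local
# 3D structure tensor. Patch radius and the filter-bucket hashing that would
# follow in training/testing are illustrative assumptions.
import numpy as np

def voxel_gradient_feature(volume, z, y, x, radius=2):
    """Strength/coherence/direction feature from the 3D patch around a voxel."""
    patch = volume[z - radius:z + radius + 1,
                   y - radius:y + radius + 1,
                   x - radius:x + radius + 1].astype(float)
    gz, gy, gx = np.gradient(patch)
    G = np.stack([gz.ravel(), gy.ravel(), gx.ravel()], axis=1)
    tensor = G.T @ G                              # 3x3 structure tensor
    eigvals, eigvecs = np.linalg.eigh(tensor)     # ascending eigenvalues
    lam = np.sqrt(np.clip(eigvals, 0.0, None))
    strength = lam[-1]
    coherence = (lam[-1] - lam[0]) / (lam[-1] + lam[0] + 1e-8)
    direction = eigvecs[:, -1]                    # dominant gradient direction
    return strength, coherence, direction

# During training, voxels would be grouped by quantized (strength, coherence,
# direction) and one least-squares filter learned per group; at test time the
# voxel's feature selects the nearest learned filter.
vol = np.random.rand(16, 16, 16)                  # stand-in MRI volume
print(voxel_gradient_feature(vol, 8, 8, 8))
```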

Resolution-independent Up-sampling for Depth Map Using Fractal Transforms

  • Liu, Meiqin;Zhao, Yao;Lin, Chunyu;Bai, Huihui;Yao, Chao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 6 / pp.2730-2747 / 2016
  • Due to the limited bandwidth and capture resolution of depth cameras, low-resolution depth maps must be up-sampled to high resolution so that they correspond to their texture images. In this paper, a novel depth map up-sampling algorithm is proposed that exploits the fractal internal self-referential feature. Fractal parameters extracted from a depth map describe its internal self-referential structure, introduce no inherent scale, and retain only the relational information of the depth map; that is, fractal transforms provide a resolution-independent description of depth maps and can up-sample them to an arbitrarily high resolution. An enhancement method is also proposed to further improve the quality of the up-sampled depth map. The experimental results demonstrate that better quality of synthesized views is achieved in both objective and subjective terms. Most importantly, depth maps of arbitrary resolution can be obtained with the aid of the proposed scheme.
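
To illustrate the resolution-independent property, the sketch below implements a heavily simplified partitioned iterated function system (PIFS): fixed block sizes, a brute-force domain search, and no enhancement step, all of which are simplifications rather than the paper's algorithm.

```python
# Sketch: a heavily simplified PIFS for depth maps. Encoding stores, per range
# block, the best-matching domain block plus an affine intensity map; decoding
# replays the same code at any scale factor, which is the resolution-independent
# property noted above. Block sizes and the brute-force search are assumptions.
import numpy as np

def downsample2(block):
    """Average 2x2 neighborhoods (domain blocks are twice the range size)."""
    h, w = block.shape
    return block.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def encode(depth, r=4):
    """For each r x r range block, find the 2r x 2r domain block and the
    intensity map s*D + o that best reproduce it (least squares)."""
    H, W = depth.shape
    domains = [(dy, dx, downsample2(depth[dy:dy + 2*r, dx:dx + 2*r]))
               for dy in range(0, H - 2*r + 1, r)
               for dx in range(0, W - 2*r + 1, r)]
    codes = []
    for ry in range(0, H, r):
        for rx in range(0, W, r):
            R = depth[ry:ry + r, rx:rx + r].astype(float)
            best = None
            for dy, dx, D in domains:
                A = np.stack([D.ravel(), np.ones(D.size)], axis=1)
                (s, o), *_ = np.linalg.lstsq(A, R.ravel(), rcond=None)
                err = np.sum((s * D + o - R) ** 2)
                if best is None or err < best[0]:
                    best = (err, dy, dx, s, o)
            codes.append((ry, rx) + best[1:])
    return codes

def decode(codes, out_shape, scale=2, r=4, iters=8):
    """Replay the code at `scale` times the original resolution."""
    r2 = r * scale
    img = np.zeros(out_shape)
    for _ in range(iters):
        nxt = np.zeros(out_shape)
        for ry, rx, dy, dx, s, o in codes:
            D = downsample2(img[dy*scale:dy*scale + 2*r2, dx*scale:dx*scale + 2*r2])
            nxt[ry*scale:ry*scale + r2, rx*scale:rx*scale + r2] = s * D + o
        img = nxt
    return img

depth_lr = np.random.rand(32, 32)                       # stand-in LR depth map
codes = encode(depth_lr, r=4)
depth_hr = decode(codes, out_shape=(64, 64), scale=2, r=4)
print(depth_hr.shape)
```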

Color-Image Guided Depth Map Super-Resolution Based on Iterative Depth Feature Enhancement

  • Lijun Zhao;Ke Wang;Jinjing Zhang;Jialong Zhang;Anhong Wang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 17, No. 8 / pp.2068-2082 / 2023
  • With the rapid development of deep learning, Depth Map Super-Resolution (DMSR) methods have achieved increasingly advanced performance. However, when the upsampling rate is very large, it is difficult for these DMSR methods to capture the structural consistency between color features and depth features. Therefore, we propose a color-image guided DMSR method based on iterative depth feature enhancement. Considering the gap between high-quality color features and low-quality depth features, we propose to decompose the depth features into High-Frequency (HF) and Low-Frequency (LF) components. Owing to the structural homogeneity of the depth HF components and the HF color features, only the HF color features are used to enhance the depth HF features; the LF color features are not used. Before each HF/LF depth feature decomposition, the LF component of the previous decomposition and the updated HF component are combined. After recursively decomposing and reorganizing the updated features, all depth LF features are combined with the final updated depth HF features to obtain the enhanced depth features. Next, the enhanced depth features are fed into a multistage depth map fusion reconstruction block, into which a cross-enhancement module is introduced to fully exploit the spatial correlation of the depth map by interleaving features between different convolution groups. Experimental results show that the proposed method is superior to many recent DMSR methods on the two objective measures of root mean square error and mean absolute deviation.
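
The sketch below shows a single, non-learned pass of the HF/LF idea: both the up-sampled depth and the color guide are split with a Gaussian low-pass, and only the HF color component is used to enhance the HF depth component. The blur sigma, mixing weight, and bicubic up-sampling are stand-ins for the paper's learned, iterative modules.

```python
# Sketch: one non-learned pass of color-guided HF/LF depth enhancement.
# Blur sigma, mixing weight alpha, and bicubic up-sampling are assumptions
# standing in for the paper's learned, iterative modules.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def split_hf_lf(x, sigma=2.0):
    lf = gaussian_filter(x, sigma)
    return x - lf, lf                       # (HF, LF)

def guided_depth_upsample(depth_lr, color_hr, scale=4, alpha=0.3):
    # Cubic up-sampling of the low-resolution depth map to the color grid.
    depth_up = zoom(depth_lr.astype(float), scale, order=3)
    depth_hf, depth_lf = split_hf_lf(depth_up)
    color_hf, _ = split_hf_lf(color_hr.astype(float))
    # Only the HF color component guides the HF depth component;
    # the LF color component is deliberately ignored.
    enhanced_hf = depth_hf + alpha * color_hf
    return depth_lf + enhanced_hf

depth_lr = np.random.rand(32, 32)           # stand-in LR depth map
color_hr = np.random.rand(128, 128)         # grayscale guide image at 4x size
print(guided_depth_upsample(depth_lr, color_hr).shape)
```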

이질적 얼굴인식을 위한 심층 정준상관분석을 이용한 지역적 얼굴 특징 학습 방법 (Local Feature Learning using Deep Canonical Correlation Analysis for Heterogeneous Face Recognition)

  • 최여름;김형일;노용만
    • 한국멀티미디어학회논문지 / Vol. 19, No. 5 / pp.848-855 / 2016
  • Face recognition has received a great deal of attention due to its wide range of applications in real-world scenarios. In such scenarios, mismatches (so-called heterogeneity) in resolution and illumination between gallery and test face images are inevitable because of different capturing conditions. To deal with this mismatch problem, we propose a local feature learning method using deep canonical correlation analysis (DCCA) for heterogeneous face recognition. With DCCA, we can effectively reduce the mismatch between the gallery and test face images. Furthermore, the proposed local features learned by DCCA enhance discriminative power by using facial local structure information. Through experiments on two different scenarios (matching near-infrared to visible face images, and matching low-resolution to high-resolution face images), we validate the effectiveness of the proposed method in terms of recognition accuracy on publicly available databases.
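
As a linear stand-in for the paper's deep CCA, the sketch below projects paired local features from two modalities into a shared correlated space with scikit-learn's classical CCA and matches them by cosine similarity. The feature dimensions and the synthetic paired data are assumptions.

```python
# Sketch: projecting local-patch features from two modalities (e.g., NIR vs.
# visible, or low- vs. high-resolution crops) into a shared correlated space
# with classical CCA -- a linear stand-in for the paper's deep CCA.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_pairs, dim_a, dim_b = 500, 64, 64

# Paired local features extracted from the same facial region in both modalities
# (synthetic here: modality B is a noisy linear transform of modality A).
feats_a = rng.normal(size=(n_pairs, dim_a))
feats_b = feats_a @ rng.normal(size=(dim_a, dim_b)) \
          + 0.1 * rng.normal(size=(n_pairs, dim_b))

cca = CCA(n_components=16)
cca.fit(feats_a, feats_b)

# At test time, project a gallery feature and a probe feature into the shared
# space and match them with cosine similarity.
gal_proj, probe_proj = cca.transform(feats_a[:1], feats_b[:1])
sim = float((gal_proj @ probe_proj.T).item() /
            (np.linalg.norm(gal_proj) * np.linalg.norm(probe_proj)))
print(sim)
```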

Application of Multi-Class AdaBoost Algorithm to Terrain Classification of Satellite Images

  • Nguyen, Ngoc-Hoa;Woo, Dong-Min
    • 전기전자학회논문지 / Vol. 18, No. 4 / pp.536-543 / 2014
  • Terrain classification is still a challenging issue in image processing, especially with high-resolution satellite images. Well-known obstacles include low accuracy in detecting targets, especially man-made structures such as buildings and roads. In this paper, we present an efficient approach to classify and detect building footprints, foliage, grass, and roads from high-resolution grayscale satellite images. Our contribution is to build a strong classifier using AdaBoost based on a combination of co-occurrence and Haar-like features. We expect the inclusion of Haar-like features to improve classification performance on man-made structures, since Haar-like features are extracted from corner and rectangle features. Also, the AdaBoost algorithm selects only critical features and generates an extremely efficient classifier. Experimental results indicate that the classification accuracy of the AdaBoost classifier is much higher than that of a conventional classifier trained with the back-propagation algorithm, and that the inclusion of Haar-like features significantly improves the classification accuracy. The accuracy of the proposed method is 98.4% for target detection and 92.8% for classification on high-resolution satellite images.
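
A minimal sketch of a multi-class AdaBoost classifier over co-occurrence (GLCM) and Haar-like tile features is shown below, using scikit-image and scikit-learn. The tile size, GLCM parameters, Haar feature types, and synthetic labels are illustrative assumptions, not the paper's configuration.

```python
# Sketch: multi-class AdaBoost terrain classification from co-occurrence
# (GLCM) and Haar-like features of fixed-size image tiles. Tile size, GLCM
# parameters, and the synthetic labels are illustrative assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops, haar_like_feature
from skimage.transform import integral_image
from sklearn.ensemble import AdaBoostClassifier

def tile_features(tile):
    """Concatenate GLCM statistics with a small set of Haar-like responses."""
    glcm = graycomatrix(tile, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    glcm_feats = [graycoprops(glcm, p).ravel()
                  for p in ("contrast", "homogeneity", "energy", "correlation")]
    ii = integral_image(tile)
    haar = haar_like_feature(ii, 0, 0, tile.shape[1], tile.shape[0],
                             feature_type=["type-2-x", "type-2-y"])
    return np.concatenate(glcm_feats + [haar])

# Synthetic stand-in training data: tiles labeled 0..3 for
# building / foliage / grass / road.
rng = np.random.default_rng(0)
tiles = [rng.integers(0, 256, size=(8, 8)).astype(np.uint8) for _ in range(200)]
labels = rng.integers(0, 4, size=200)

X = np.array([tile_features(t) for t in tiles])
clf = AdaBoostClassifier(n_estimators=200)   # multi-class boosting
clf.fit(X, labels)
print(clf.predict(X[:5]))
```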