• Title/Summary/Keyword: Visual Resolution


Analysis of Spatial Resolution Characteristics for DMC/UltraCamXp/ADS80 Digital Aerial Image Based on Visual Method (시각적 기법에 의한 DMC/UltraCamXp/ADS80 디지털 항공영상의 공간해상도 특성 분석)

  • Lee, Tae Yun;Lee, Jae One
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.24 no.1
    • /
    • pp.61-68
    • /
    • 2016
  • Digital aerial images have recently become common in large-scale map production owing to their excellent geometry and high spatial and radiometric resolution. However, a quality verification process for the acquired images must precede production in order to secure the high precision and reliability of the results. Several experimental studies verifying digital imaging systems have been actively conducted abroad by constructing permanent test fields. Domestically, however, such studies and experiments are still at an early stage, so a practical scheme for image quality verification is urgently needed. Hence, this study presents an easy method to measure the spatial resolution of an image visually using a portable Siemens star. The images used in the study were obtained with three different cameras: two frame-array sensors, the DMC and UltraCamXp, and a linear-array sensor, the ADS80. The Siemens star target appearing in each image is extracted, and the spatial resolution of the image is then compared with the theoretical GSD (Ground Sample Distance) by a visual method. In addition, the change of spatial resolution depending on the distance of the Siemens star from the image center and on the flight and cross-flight directions is also compared and analyzed. While the theoretical GSDs of the images taken with each camera are about 6~9 cm, the visual resolutions are 1.2~1.3 times the theoretical ones.
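The theoretical GSD the abstract compares against follows from the camera geometry alone. A minimal sketch, with illustrative numbers (the paper's exact pixel size, focal length, and flying height are not given in this listing):

```python
# Hedged sketch: theoretical ground sample distance (GSD) of a frame camera.
# pixel_size, focal_length, and flight_height are illustrative values, not
# the paper's flight configuration.
def theoretical_gsd(pixel_size_m, focal_length_m, flight_height_m):
    """GSD = pixel size * flying height / focal length (all in meters)."""
    return pixel_size_m * flight_height_m / focal_length_m

# Example: a 12 um pixel, 120 mm lens, 800 m above ground -> 0.08 m GSD.
gsd = theoretical_gsd(12e-6, 0.120, 800.0)

# The paper reports visual resolutions 1.2~1.3 times the theoretical GSD:
visual_resolution = 1.25 * gsd  # ~0.10 m
```

The reported 1.2~1.3 factor is the ratio between this geometric value and what a human can actually resolve on the Siemens star target.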

Bag of Visual Words Method based on PLSA and Chi-Square Model for Object Category

  • Zhao, Yongwei;Peng, Tianqiang;Li, Bicheng;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.7
    • /
    • pp.2633-2648
    • /
    • 2015
  • The problems of visual-word synonymy and ambiguity always exist in conventional bag of visual words (BoVW) based object category methods. Besides, noisy visual words, so-called "visual stop-words," degrade the semantic resolution of the visual dictionary. In view of this, a novel bag of visual words method based on PLSA and a chi-square model for object category is proposed. Firstly, Probabilistic Latent Semantic Analysis (PLSA) is used to analyze the semantic co-occurrence probability of visual words, infer the latent semantic topics in images, and obtain the latent topic distributions induced by the words. Secondly, the KL divergence is adopted to measure the semantic distance between visual words, which yields semantically related homoionyms. Then, an adaptive soft-assignment strategy is combined to realize a soft mapping between SIFT features and several homoionyms. Finally, the chi-square model is introduced to eliminate the "visual stop-words" and reconstruct the visual vocabulary histograms. Moreover, an SVM (Support Vector Machine) is applied to accomplish object classification. Experimental results indicate that the synonymy and ambiguity problems of visual words can be overcome effectively, and that both the discriminative power of the visual dictionary and the object classification performance are substantially boosted compared with traditional methods.
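The semantic-distance step described above can be sketched as follows: once PLSA yields a topic distribution per visual word, KL divergence between two words' distributions flags near-synonyms. The distributions below are toy values, not trained PLSA output:

```python
import numpy as np

# Hedged sketch of the semantic-distance step: KL divergence between the
# per-word topic distributions P(topic | word) obtained from PLSA.
def kl_divergence(p, q, eps=1e-12):
    p, q = np.asarray(p, float) + eps, np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

w1 = [0.7, 0.2, 0.1]    # topic distribution of visual word 1
w2 = [0.65, 0.25, 0.1]  # near-synonym: small KL -> candidate homoionym
w3 = [0.1, 0.1, 0.8]    # semantically distant word

assert kl_divergence(w1, w2) < kl_divergence(w1, w3)
```

Words whose divergence falls below a threshold would then share soft assignments of SIFT features, which is the paper's remedy for synonymy.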

Acquisition of Subcentimeter GSD Images Using UAV and Analysis of Visual Resolution (UAV를 이용한 Subcentimeter GSD 영상의 취득 및 시각적 해상도 분석)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.35 no.6
    • /
    • pp.563-572
    • /
    • 2017
  • The purpose of this study is to investigate the effects of flight height, flight speed, camera exposure time, and autofocusing on the visual resolution of images, in order to obtain ultra-high-resolution images with a GSD of less than 1 cm. It also aims to evaluate the ease of recognition of various types of aerial targets. For this purpose, we measured the visual resolution using a 7952×5304-pixel 35 mm CMOS sensor and a 55 mm prime lens at 20 m intervals from 20 m to 120 m above ground. As a result, with automatic focusing, the visual resolution is 1.1~1.6 times the theoretical GSD; without automatic focusing, it is 1.5~3.5 times. Next, images were taken at 80 m above ground at a constant flight speed of 5 m/s, while halving the exposure time from 1/60 s down to 1/2000 s. Assuming that blur is allowed within one pixel, the visual resolution is 1.3~1.5 times the theoretical GSD when the exposure time stays within the longest allowable exposure time, and 1.4~3.0 times when it does not. If aerial targets are printed on A4 paper and shot within 80 m above ground, the coded targets can be recognized automatically by commercial software, and various types of general and coded targets can be manually recognized with ease.
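The "longest allowable exposure time" above comes from a simple constraint: forward motion during the exposure must stay within one GSD. A sketch with the sensor figures from the abstract (the exact pixel pitch is an assumption derived from a 35 mm-wide, 7952-pixel sensor):

```python
# Hedged sketch of the <=1 pixel motion-blur constraint at 80 m AGL, 5 m/s.
def max_exposure_for_one_pixel(gsd_m, speed_mps):
    """Exposure time at which the platform moves exactly one GSD."""
    return gsd_m / speed_mps

pixel_pitch = 0.035 / 7952          # ~4.4 um, assuming a 35 mm-wide sensor
gsd = pixel_pitch * 80.0 / 0.055    # 55 mm lens, 80 m above ground
t_max = max_exposure_for_one_pixel(gsd, 5.0)
```

This yields a GSD of roughly 6.4 mm and a maximum exposure near 1/800 s, which is consistent with the paper testing exposures between 1/60 s and 1/2000 s and finding degradation for the longer ones.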

Fast Super-Resolution Algorithm Based on Dictionary Size Reduction Using k-Means Clustering

  • Jeong, Shin-Cheol;Song, Byung-Cheol
    • ETRI Journal
    • /
    • v.32 no.4
    • /
    • pp.596-602
    • /
    • 2010
  • This paper proposes a computationally efficient learning-based super-resolution algorithm using k-means clustering. Conventional learning-based super-resolution requires a huge dictionary for reliable performance, which incurs a tremendous memory cost as well as a burdensome matching computation. In order to overcome this problem, the proposed algorithm significantly reduces the size of the trained dictionary by clustering similar patches in the learning phase. Experimental results show that the proposed algorithm provides superior visual quality to conventional algorithms while requiring much lower computational complexity.
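The dictionary-reduction idea can be sketched directly: cluster the training patches with k-means and keep only the centroids as dictionary atoms. Patch size, patch count, and cluster count below are illustrative, not the paper's settings:

```python
import numpy as np

# Hedged sketch: reduce a patch dictionary by keeping k-means centroids.
rng = np.random.default_rng(0)
patches = rng.standard_normal((1000, 25))  # 1000 flattened 5x5 patches
k = 32

# Plain Lloyd's algorithm (a stand-in for any k-means implementation).
centroids = patches[rng.choice(len(patches), k, replace=False)].copy()
for _ in range(10):
    dists = ((patches[:, None, :] - centroids[None]) ** 2).sum(-1)
    labels = dists.argmin(1)
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = patches[labels == j].mean(0)

dictionary = centroids  # 32 atoms instead of 1000 raw patches
```

Matching an input patch against 32 centroids instead of 1000 raw patches is where the claimed memory and computation savings come from.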

Three-Dimensional Photon Counting Imaging with Enhanced Visual Quality

  • Lee, Jaehoon;Lee, Min-Chul;Cho, Myungjin
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.180-187
    • /
    • 2021
  • In this paper, we present a computational volumetric reconstruction method for three-dimensional (3D) photon counting imaging with enhanced visual quality when low-resolution elemental images are used under photon-starved conditions. In conventional photon counting imaging with low-resolution elemental images, it may be difficult to estimate the 3D scene correctly because of a lack of scene information. In addition, the reconstructed 3D images may be blurred because volumetric computational reconstruction has an averaging effect. In contrast, our method uses a pixel rearrangement technique for elemental images and a Bayesian approach as the reconstruction and estimation methods, respectively. Therefore, our method can enhance the visual quality and estimation accuracy of the reconstructed 3D images, because it has no averaging effect and uses prior information about the 3D scene. To validate our technique, we performed optical experiments and demonstrated the reconstruction results.
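The "photon-starved" setting in this line of work is usually modeled by a Poisson photon-count image whose rate is the normalized scene irradiance scaled by an expected photon budget. A minimal sketch with toy values, not the paper's experimental setup:

```python
import numpy as np

# Hedged sketch of the standard photon-counting image model.
rng = np.random.default_rng(1)
irradiance = rng.random((64, 64))
irradiance /= irradiance.sum()          # normalize scene energy to 1
Np = 10_000                             # photon-starved photon budget
photons = rng.poisson(Np * irradiance)  # simulated photon-counting image

# A simple maximum-likelihood-style scene estimate; the paper instead uses
# a Bayesian estimator with a scene prior to improve on this baseline.
estimate = photons / Np
```

The Bayesian step the abstract describes would replace the last line with a posterior estimate, trading this unbiased but noisy MLE for a prior-regularized one.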

Comparison of Fusion Methods for Generating 250m MODIS Image

  • Kim, Sun-Hwa;Kang, Sung-Jin;Lee, Kyu-Sung
    • Korean Journal of Remote Sensing
    • /
    • v.26 no.3
    • /
    • pp.305-316
    • /
    • 2010
  • The MODerate Resolution Imaging Spectroradiometer (MODIS) sensor has 36 bands at 250 m, 500 m, and 1 km spatial resolutions. However, the 500 m and 1 km MODIS data exhibit limitations when applied to small areas with complex land cover types. In this study, we produce seven 250 m spectral bands by fusing the two native 250 m bands with the five 500 m bands. In order to recommend the best fusion method for MODIS data, we compare seven fusion methods: the Brovey transform, the principal components analysis (PCA) fusion method, the Gram-Schmidt fusion method, the local mean and variance matching method, the least squares fusion method, the discrete wavelet fusion method, and the wavelet-PCA fusion method. The results of these fusion methods are compared using various evaluation indicators, such as correlation, relative difference of means, relative variation, deviation index, peak signal-to-noise ratio, and the universal image quality index, as well as visual interpretation. Among the fusion methods, the local mean and variance matching method provides the best result for both visual interpretation and the evaluation indicators. The fusion of 250 m MODIS data may be used to effectively improve the accuracy of various MODIS land products.
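Of the seven compared methods, the Brovey transform is the simplest to state: each upsampled low-resolution band is scaled by the ratio of the high-resolution intensity to the band sum. A sketch on toy arrays (not MODIS data):

```python
import numpy as np

# Hedged sketch of the Brovey transform, one of the compared fusion methods.
def brovey(bands_lr, pan_hr, eps=1e-12):
    """bands_lr: (n_bands, H, W) upsampled bands; pan_hr: (H, W) intensity."""
    total = bands_lr.sum(axis=0) + eps
    return bands_lr * (pan_hr / total)[None, :, :]

bands = np.ones((3, 4, 4)) * np.array([1.0, 2.0, 3.0])[:, None, None]
pan = np.full((4, 4), 12.0)
fused = brovey(bands, pan)  # band ratios preserved, intensity matches pan
```

The fused bands keep the spectral ratios of the input bands while their sum reproduces the high-resolution intensity, which is both the appeal and the known radiometric-distortion risk of this method.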

Stage-GAN with Semantic Maps for Large-scale Image Super-resolution

  • Wei, Zhensong;Bai, Huihui;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.3942-3961
    • /
    • 2019
  • Recently, deep super-resolution networks have successfully learned the non-linear mapping from low-resolution inputs to high-resolution outputs. However, for large scaling factors this approach has difficulty learning the relation between low-resolution and high-resolution images, which leads to poor restoration. In this paper, we propose Stage Generative Adversarial Networks (Stage-GAN) with semantic maps for image super-resolution (SR) at large scaling factors. We decompose the task of image super-resolution into a novel semantic-map-based reconstruction and refinement process. In the initial stage, semantic maps based on the given low-resolution images are generated by Stage-0 GAN. In the next stage, the generated semantic maps from Stage-0 and the corresponding low-resolution images are used to yield high-resolution images by Stage-1 GAN. In order to remove reconstruction artifacts and blur from the high-resolution images, a Stage-2 GAN-based post-processing module is proposed in the last stage, which reconstructs high-resolution images with photo-realistic details. Extensive experiments and comparisons with other SR methods demonstrate that our proposed method restores photo-realistic images with visual improvements. For scale factor ×8, our method performs favorably against other methods in terms of gradient similarity.
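The three-stage decomposition (not the GAN training itself) can be sketched as a pipeline of functions. The "networks" below are placeholder functions standing in for the trained Stage-0/1/2 GANs, so only the data flow and shapes reflect the paper:

```python
import numpy as np

# Hedged sketch of the Stage-GAN data flow; each stage is a stand-in.
def stage0(lr):
    """LR image -> semantic map on the same LR grid (placeholder)."""
    return (lr > lr.mean()).astype(float)

def stage1(sem, lr, scale=8):
    """(semantic map, LR image) -> HR image; here plain upsampling."""
    return np.kron(lr * (1 + sem), np.ones((scale, scale)))

def stage2(hr):
    """Refinement / post-processing stand-in."""
    return np.clip(hr, 0.0, 1.0)

lr = np.random.default_rng(2).random((8, 8))
hr = stage2(stage1(stage0(lr), lr))  # 64x64 output for scale factor x8
```

The point of the decomposition is that Stage-1 conditions on a semantic map rather than learning the full LR-to-HR mapping in one step, which is what the abstract says breaks down at large scale factors.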

Free-view Pixels of Elemental Image Rearrangement Technique (FPERT)

  • Lee, Jaehoon;Cho, Myungjin;Inoue, Kotaro;Tashiro, Masaharu;Lee, Min-Chul
    • Journal of information and communication convergence engineering
    • /
    • v.17 no.1
    • /
    • pp.60-66
    • /
    • 2019
  • In this paper, we propose a new free-view three-dimensional (3D) computational reconstruction method for integral imaging to improve the visual quality of reconstructed 3D images when low-resolution elemental images are used. In a conventional free-view reconstruction, the visual quality of the reconstructed 3D images is insufficient to provide 3D information to applications because of the shift-and-sum process, and its processing speed is slow. To solve these problems, our proposed method uses a pixel rearrangement technique (PERT) with locally selective elemental images. In general, PERT can reconstruct 3D images with high visual quality at a fast processing speed; however, it cannot provide free-view reconstruction. Therefore, using our proposed method, free-view reconstructed 3D images with high visual quality can be generated when low-resolution elemental images are used. To show the feasibility of our proposed method, we validated it with optical experiments.
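The pixel-rearrangement idea behind PERT can be sketched in a few lines: pixels at the same position across a K×K array of elemental images are gathered and interleaved into one larger image. Sizes below are illustrative:

```python
import numpy as np

# Hedged sketch of pixel rearrangement for an elemental image array (EIA).
K, m = 4, 8  # 4x4 elemental images, each 8x8 pixels
eia = np.random.default_rng(3).random((K, K, m, m))  # axes: (i, j, p, q)

# Interleave: output pixel (p*K + i, q*K + j) <- elemental image (i, j),
# pixel (p, q), giving an (m*K) x (m*K) rearranged reconstruction.
rearranged = eia.transpose(2, 0, 3, 1).reshape(m * K, m * K)
```

Because this is a pure index permutation rather than a shift-and-sum, there is no averaging blur and the cost is a single reshape, which matches the speed and quality claims made for PERT above.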

Super Resolution Image Reconstruction using the Maximum A-Posteriori Method

  • Kwon Hyuk-Jong;Kim Byung-Guk
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.115-118
    • /
    • 2004
  • Images with high resolution are desired and often required in many visual applications. When resolution cannot be improved by replacing sensors, either because of cost or hardware physical limits, super-resolution image reconstruction can be resorted to. Super-resolution image reconstruction refers to image processing algorithms that produce high-quality, high-resolution images from a set of low-quality, low-resolution images. The method has proved useful in many practical cases where multiple frames of the same scene can be obtained, including satellite imaging, video surveillance, video enhancement and restoration, digital mosaicking, and medical imaging. The approach can be formulated in either the frequency domain or the spatial domain. Much of the earlier work concentrated on the frequency domain formulation, but as more general degradation models were considered, later research has been almost exclusively on spatial domain formulations. The spatial-domain method has three stages: i) motion estimation or image registration, ii) interpolation onto a high-resolution grid, and iii) a deblurring process. The super-resolution grid construction in the second stage is discussed in this paper. We applied the Maximum A-Posteriori (MAP) reconstruction method, one of the major methods in super-resolution grid construction. Based on this method, we reconstructed high-resolution images from a set of low-resolution images and compared the results with those from other known interpolation methods.

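The MAP formulation referenced in the abstract is typically the minimization of a data term ||y − DHx||² plus a prior λ||Gx||². A minimal 1-D sketch with decimation D (factor 2), no blur (H = I), a circular difference operator G, and plain gradient descent; all settings are illustrative:

```python
import numpy as np

# Hedged sketch of MAP super-resolution on a 1-D signal.
rng = np.random.default_rng(4)
x_true = np.sin(np.linspace(0, 3 * np.pi, 32))
y = x_true[::2] + 0.01 * rng.standard_normal(16)  # LR observation y = D x + n

lam, step = 0.1, 0.4
x = np.repeat(y, 2)                               # initial HR guess
for _ in range(200):
    data_grad = np.zeros_like(x)
    data_grad[::2] = x[::2] - y                   # D^T (D x - y)
    prior_grad = 2 * x - np.roll(x, 1) - np.roll(x, -1)  # G^T G x (circular)
    x -= step * (data_grad + lam * prior_grad)    # gradient descent step
```

At convergence the even samples track the observations while the prior interpolates the odd samples smoothly, which is the "interpolation onto a high-resolution grid" stage the abstract discusses.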

Application of Deep Learning to Solar Data: 6. Super Resolution of SDO/HMI magnetograms

  • Rahman, Sumiaya;Moon, Yong-Jae;Park, Eunsu;Jeong, Hyewon;Shin, Gyungin;Lim, Daye
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.52.1-52.1
    • /
    • 2019
  • The Helioseismic and Magnetic Imager (HMI) is the instrument on the Solar Dynamics Observatory (SDO) for studying the magnetic field and oscillations at the solar surface. HMI images are not sufficient for analyzing very small magnetic features on the solar surface, since they have a spatial resolution of one arcsec. Super-resolution is a technique that enhances the resolution of a low-resolution image. In this study, we use a deep-learning model to enhance solar image resolution, generating a high-resolution HMI image from a low-resolution HMI image (4 by 4 binning). Deep-learning networks try to find the hidden mapping between low-resolution and high-resolution images from given input images and the corresponding output images. We trained a model based on very deep residual channel attention networks (RCAN) with HMI images from 2014 and tested it with HMI images from 2015. We find that the model achieves high-quality results in terms of both visual inspection and quantitative measures: a peak signal-to-noise ratio (PSNR) of 31.40, a correlation coefficient of 0.96, and a root mean square error (RMSE) of 0.004. This result is much better than conventional bi-cubic interpolation. We will apply this model to full-resolution SDO/HMI and GST magnetograms.

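The three reported measures (RMSE, PSNR, correlation coefficient) are standard and easy to state precisely. A sketch on toy stand-in arrays, not actual magnetogram data; the peak value used for PSNR is an assumption (unit data range):

```python
import numpy as np

# Hedged sketch of the evaluation measures reported above.
def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

def psnr(a, b, peak=1.0):
    """PSNR = 20 log10(peak / RMSE), assuming a unit data range."""
    return float(20 * np.log10(peak / rmse(a, b)))

def corr(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

truth = np.linspace(0, 1, 10_000).reshape(100, 100)
pred = truth + 0.004  # a constant 0.004 offset -> RMSE of exactly 0.004
```

Note that an RMSE of 0.004 only corresponds to a fixed PSNR once the peak value is fixed, so the 31.40 dB reported in the abstract implies the authors' own data range and normalization.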