• Title/Summary/Keyword: Local Images


Quantitative Morphology of High Redshift Galaxies Using GALEX Ultraviolet Images of Nearby Galaxies

  • Yeom, Bum-Suk;Rey, Soo-Chang;Kim, Young-Kwang;Kim, Suk;Lee, Young-Dae
    • The Bulletin of The Korean Astronomical Society / v.36 no.2 / pp.73.1-73.1 / 2011
  • An understanding of the ultraviolet (UV) properties of nearby galaxies is essential for interpreting images of high redshift systems. In this respect, predicting the optical-band morphologies of high-redshift galaxies requires UV images of local galaxies with various morphologies. We present simulated optical images of galaxies at high redshifts using diverse, high-quality UV images of nearby galaxies obtained with the Galaxy Evolution Explorer (GALEX). We measured the CAS (concentration, asymmetry, clumpiness) and Gini/M20 parameters of galaxies in the near-ultraviolet (NUV) and simulated optical images to quantify the effects of redshift on the appearance of distant stellar systems. We also discuss how the morphological parameters change with redshift.
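
The abstract above measures CAS (concentration, asymmetry, clumpiness) and Gini/M20 statistics on NUV and redshift-simulated images. As a rough illustration only, the Python sketch below computes two of those quantities, the rotational asymmetry A and the Gini coefficient of the flux distribution, in their commonly published forms; background subtraction, aperture definition, and the M20 measurement used in the actual study are omitted, and the synthetic "galaxy" is purely illustrative.

```python
import numpy as np

def asymmetry(img):
    """Rotational asymmetry A: normalized residual between the image
    and its 180-degree rotation (background term omitted)."""
    rotated = np.rot90(img, 2)
    return np.sum(np.abs(img - rotated)) / np.sum(np.abs(img))

def gini(img):
    """Gini coefficient of the pixel flux distribution."""
    flux = np.sort(np.abs(img).ravel())
    n = flux.size
    i = np.arange(1, n + 1)
    return np.sum((2 * i - n - 1) * flux) / (flux.mean() * n * (n - 1))

# toy usage on a synthetic elliptical "galaxy"
y, x = np.mgrid[-32:32, -32:32]
galaxy = np.exp(-(x**2 + 1.5 * y**2) / 200.0)
print("A =", asymmetry(galaxy), "Gini =", gini(galaxy))
```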

Contour-Based approach for mosaicking images that contain moving objects (움직이는 객체를 포함하는 영상의 컨투어 기반 모자이킹 방법)

  • 정성룡;최윤희;최태선
    • Proceedings of the IEEK Conference / 2002.06d / pp.323-326 / 2002
  • This paper studies how to deal with moving objects in images when mosaicking them. The global motion between two images is biased by the local motion of these moving objects, so eliminating their effect is very important. In this paper, a contour-based approach for mosaicking images that contain moving objects is presented. Once the contours of the images are obtained, we can both eliminate the moving objects and mosaic the images; at this stage, a hierarchical moving-object elimination technique is introduced. Experiments on the Stefan tennis sequence verify the proposed algorithm.
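
The paper's contour-based, hierarchical elimination of moving objects is not reproduced here; the sketch below only illustrates the surrounding mosaicking step under stated assumptions: a global homography estimated robustly with RANSAC (which rejects correspondences on moving objects) followed by warping onto a common canvas. Feature choice, canvas size, and blending are illustrative, not the authors' implementation.

```python
import cv2
import numpy as np

def mosaic_pair(img1, img2):
    """Warp img2 onto img1 using a robustly estimated global homography.
    RANSAC discards correspondences on moving objects, standing in for the
    paper's contour-based, hierarchical object elimination."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas[:h, :w] = img1            # simple overwrite blend for illustration
    return canvas
```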

Registration Method between High Resolution Optical and SAR Images (고해상도 광학영상과 SAR 영상 간 정합 기법)

  • Jeon, Hyeongju;Kim, Yongil
    • Korean Journal of Remote Sensing / v.34 no.5 / pp.739-747 / 2018
  • Integration analysis of multi-sensor satellite images is becoming increasingly important, and its first step is image registration between the sensors. SIFT (Scale Invariant Feature Transform) is a representative image registration method. However, optical and SAR (Synthetic Aperture Radar) images differ in sensor attitude and radiometric characteristics at acquisition, so conventional methods such as SIFT are difficult to apply because the radiometric relationship between the images is nonlinear. To overcome this limitation, we proposed a modified method that combines SAR-SIFT with the shape descriptor vector DLSS (Dense Local Self-Similarity). We conducted an experiment using two pairs of Cosmo-SkyMed and KOMPSAT-2 images collected over Daejeon, Korea, an area with a high density of buildings. The proposed method extracted correct matching points, in contrast to conventional methods such as SIFT and SAR-SIFT, and gave quantitatively reasonable results with RMSEs of 1.66 m and 2.45 m over the two image pairs.
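
SAR-SIFT and the DLSS descriptor used in this paper are not available in standard libraries, so the sketch below only conveys the underlying idea of a local self-similarity descriptor: correlate a centre patch against its surrounding window and pool the correlation surface, so matching relies on local structure rather than raw intensity, which is what makes optical-SAR matching feasible. Patch, window, and bin sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def local_self_similarity(img, y, x, patch=5, window=21, bins=8):
    """Toy local self-similarity descriptor at pixel (y, x). The caller must
    ensure (y, x) lies at least (window + patch) // 2 pixels from the border."""
    r, w = patch // 2, window // 2
    centre = img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    surface = np.zeros((window, window))
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            p = img[y + dy - r:y + dy + r + 1, x + dx - r:x + dx + r + 1]
            ssd = np.sum((p - centre) ** 2)
            surface[dy + w, dx + w] = np.exp(-ssd / (centre.size * 255.0))
    # pool the correlation surface into radial bins for a compact vector
    yy, xx = np.mgrid[-w:w + 1, -w:w + 1]
    radius = np.sqrt(yy ** 2 + xx ** 2)
    edges = np.linspace(0, radius.max() + 1e-6, bins + 1)
    return np.array([surface[(radius >= lo) & (radius < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```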

MSER-based Character detection using contrast differences in natural images (자연 이미지에서 명암차이를 이용한 MSER 기반의 문자 검출 기법)

  • Kim, Jun Hyeok;Lee, Sang Hun;Lee, Gang Seong;Kim, Ki Bong
    • Journal of the Korea Convergence Society / v.10 no.5 / pp.27-34 / 2019
  • In this paper, we propose a method that removes background regions by analyzing the pattern of character regions. The MSER (Maximally Stable Extremal Regions) method, which extracts regions of nearly constant contrast, also detects background regions in its character detection results. To solve this problem, we apply MSER to natural images and remove the background by computing the rate of contrast change between candidate character regions and background regions whose contrast values differ. In the background-removed image, the LBP (Local Binary Patterns) method is then used to identify regions with uniform values as character regions and to perform character detection. Experiments were carried out on simple images with backgrounds, images with frontal characters, and images with slanted characters. The proposed method achieves a detection rate 1.73% higher than the conventional MSER and MSER + LBP methods.
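
A minimal sketch of an MSER + LBP filtering pipeline of the kind described above, assuming OpenCV's MSER and scikit-image's LBP; the uniformity threshold and minimum region size are illustrative stand-ins for the contrast-change criterion tuned in the paper.

```python
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def detect_character_candidates(gray):
    """MSER proposes stable regions; a simple LBP-uniformity test then
    filters out textured background regions (thresholds are illustrative)."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    keep = []
    for pts in regions:
        x, y, w, h = cv2.boundingRect(pts)
        patch = gray[y:y + h, x:x + w]
        if patch.size < 64:
            continue
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        if hist[-1] < 0.3:   # few non-uniform patterns -> character-like region
            keep.append((x, y, w, h))
    return keep
```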

A Realtime Road Weather Recognition Method Using Support Vector Machine (Support Vector Machine을 이용한 실시간 도로기상 검지 방법)

  • Seo, Min-ho;Youk, Dong-bin;Park, Sae-rom;Jun, Jin-ho;Park, Jung-hoon
    • Journal of the Korean Society of Industry Convergence / v.23 no.6_2 / pp.1025-1032 / 2020
  • In this paper, we propose a method to classify road weather conditions into rain, fog, and sun using an SVM (Support Vector Machine) classifier, after extracting weather features from images acquired in real time by an optical sensor installed on a roadside post. A multi-dimensional weather feature vector, consisting of global weather-related image features such as image sharpness, image entropy, Michelson contrast, MSCN (Mean Subtraction and Contrast Normalization), dark channel prior, image colorfulness, and local binary pattern, was extracted from road images, and a road weather classifier was then created by machine learning on 700 sun images, 2,000 rain images, and 1,000 fog images. Finally, the classification performance was tested on 140 sun images, 510 rain images, and 240 fog images. The overall classification performance is assessed to be applicable to real road services and can be enhanced further through optimization along with year-round data collection and training.
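
A minimal sketch of the classification stage, assuming the weather features named in the abstract have already been extracted into a feature matrix; the random arrays below are placeholders for those features, and the SVM hyperparameters are illustrative rather than the values used in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrix: one row per road image, columns being the global
# weather features listed in the abstract (sharpness, entropy, Michelson
# contrast, MSCN statistics, dark channel prior, colorfulness, LBP summary, ...).
# Labels: 0 = sun, 1 = rain, 2 = fog.
X_train = np.random.rand(3700, 12)        # 700 sun + 2,000 rain + 1,000 fog images
y_train = np.random.randint(0, 3, 3700)   # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)

X_test = np.random.rand(890, 12)          # 140 sun + 510 rain + 240 fog images
pred = clf.predict(X_test)                # predicted weather class per image
```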

A Preliminary Analysis on the Radiometric Difference Across the Level 1B Slot Images of GOCI-II (GOCI-II Level 1B 분할영상 간의 복사 편차에 대한 초기 분석)

  • Kim, Wonkook;Lim, Taehong;Ahn, Jae-hyun;Choi, Jong-kuk
    • Korean Journal of Remote Sensing / v.37 no.5_2 / pp.1269-1279 / 2021
  • Geostationary Ocean Color Imager II (GOCI-II), which has been operating successfully since its launch in 2020, acquires local-area images as 12 Level 1B slot images captured sequentially in a 3×4 grid pattern. The boundary areas between adjacent slots are prone to discontinuities in radiance, which become even clearer in the derived Level 2 data, and this warrants precise analysis and correction before distribution. This study evaluates the relative radiometric biases between adjacent slot images by exploiting the areas where the images overlap. Although it would be ideal to derive the statistics from a large number of images, this preliminary analysis uses only scenes acquired at a specific time to understand the general behavior of the bias and variance in radiance. Level 1B images of February 21st, 2021 (UTC 03, i.e., local noon) were selected for the analysis based on cloud cover, and the radiance statistics were calculated only over ocean pixels. The results showed that the relative bias is 0~1% in all bands except Band 1 (380 nm), which exhibited a larger bias of 1~2%. Except for Band 1 in slot pairs aligned north-south, the biases in all directions and all bands turned out to be in the direction opposite to what the difference in sun elevation would have caused.
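
A simplified sketch of the overlap statistic described above: given two adjacent slot radiance arrays resampled onto their common overlap grid and an ocean mask, it reports a symmetric relative bias and its spread in percent. The resampling, cloud screening, and per-band handling of the actual GOCI-II analysis are omitted, and the function name is illustrative.

```python
import numpy as np

def relative_bias(slot_a, slot_b, ocean_mask):
    """Relative radiometric bias (percent) between two adjacent Level 1B slot
    images over their overlapping ocean pixels. Inputs are radiance arrays
    already resampled onto the common overlap grid."""
    valid = ocean_mask & np.isfinite(slot_a) & np.isfinite(slot_b)
    a, b = slot_a[valid], slot_b[valid]
    ratio = (a - b) / (0.5 * (a + b))      # symmetric relative difference
    return 100.0 * np.mean(ratio), 100.0 * np.std(ratio)
```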

Research on Local and Global Infrared Image Pre-Processing Methods for Deep Learning Based Guided Weapon Target Detection

  • Jae-Yong Baek;Dae-Hyeon Park;Hyuk-Jin Shin;Yong-Sang Yoo;Deok-Woong Kim;Du-Hwan Hur;SeungHwan Bae;Jun-Ho Cheon;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information / v.29 no.7 / pp.41-51 / 2024
  • In this paper, we explore how deep learning object detection on infrared (IR) images can enhance target detection accuracy in guided weapons. Because IR images are influenced by factors such as time and temperature, it is crucial to ensure a consistent representation of object features across environments when training the model. A simple way to address this is to emphasize the features of target objects and reduce noise in the infrared images through appropriate pre-processing techniques. However, previous studies have not sufficiently discussed pre-processing methods for training deep learning models on infrared images. In this paper, we investigate the impact of image pre-processing techniques on infrared image-based training for object detection. To achieve this, we analyze pre-processing results on infrared images that use global or local information from the video and from individual images. In addition, to confirm how the images converted by each pre-processing technique affect detector training, we train the YOLOX target detector on images processed by the various pre-processing methods and analyze the results. In particular, the experiments show that CLAHE (Contrast Limited Adaptive Histogram Equalization) yields the highest detection accuracy, with a mean average precision (mAP) of 81.9%.
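
A minimal sketch of the CLAHE pre-processing step that gave the best detection accuracy in the experiments, assuming a 16-bit single-channel IR frame; the min-max stretch, clip limit, and tile grid below are illustrative defaults, not the settings tuned in the paper.

```python
import cv2
import numpy as np

def preprocess_ir_clahe(ir_frame_16bit):
    """CLAHE pre-processing for a raw IR frame before detector training."""
    # Normalize raw IR counts to 8 bits using the frame's own min/max
    # (a simple image-level stretch; the paper also examines video-level statistics).
    frame = cv2.normalize(ir_frame_16bit, None, 0, 255,
                          cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(frame)
```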

Color Image Enhancement Based on an Improved Image Formation Model (개선된 영상 생성 모델에 기반한 칼라 영상 향상)

  • Choi, Doo-Hyun;Jang, Ick-Hoon;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.6 s.312 / pp.65-84 / 2006
  • In this paper, we present an improved image formation model and propose a color image enhancement based on it. In the presented model, an input image is represented as a product of global illumination, local illumination, and reflectance. In the proposed enhancement, an input RGB color image is converted into an HSV color image. Under the assumption of white-light illumination, the H and S component images are kept as they are and only the V component image is enhanced based on the image formation model. The global illumination is estimated by applying a linear LPF with a wide support region to the input V component image, and the local illumination by applying a JND (just noticeable difference)-based nonlinear LPF with a narrow support region to the processed image in which the estimated global illumination has been removed from the input V component image. The reflectance is estimated by dividing the input V component image by the estimated global and local illuminations. After gamma correction of the three estimated components, the output V component image is obtained from their product, and histogram modeling is then applied to produce the final output V component image. Finally, an output RGB color image is obtained from the H and S component images of the input color image and the final output V component image. Experimental results on a test image DB built from color images downloaded from the NASA homepage and MPEG-7 CCD color images show that the proposed method gives output color images with greatly increased global and local contrast, without halo effects or color shift.
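
A simplified sketch of the decomposition described above (V = global illumination × local illumination × reflectance), with a wide Gaussian standing in for the linear LPF, a narrower Gaussian standing in for the JND-based nonlinear LPF, and the final histogram-modeling step omitted; the sigma and gamma values are illustrative assumptions.

```python
import cv2
import numpy as np

def enhance_v_channel(bgr, gamma_g=0.6, gamma_l=0.8, gamma_r=1.0):
    """Illumination/reflectance decomposition of the V channel, then
    per-component gamma correction and recombination."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    h, s, v = cv2.split(hsv)
    v = np.clip(v, 1.0, 255.0)
    global_illum = cv2.GaussianBlur(v, (0, 0), sigmaX=60)          # wide support
    local_illum = cv2.GaussianBlur(v / global_illum, (0, 0), sigmaX=5)
    reflectance = v / (global_illum * local_illum)
    v_out = (255.0 * (global_illum / 255.0) ** gamma_g
             * local_illum ** gamma_l * reflectance ** gamma_r)
    hsv_out = cv2.merge([h, s, np.clip(v_out, 0, 255)]).astype(np.uint8)
    return cv2.cvtColor(hsv_out, cv2.COLOR_HSV2BGR)
```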

Subpixel Shift Estimation in Noisy Image Using Iterative Phase Correlation of A Selected Local Region (잡음 영상에서 국부 영역의 반복적인 위상 상관도를 이용한 부화소 이동량 추정방법)

  • Ha, Ho-Gun;Jang, In-Su;Ko, Kyung-Woo;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.1 / pp.103-119 / 2010
  • In this paper, we propose a subpixel shift estimation method that uses phase correlation within a local region for the registration of noisy images. Phase correlation, derived from the analysis of shifted and downsampled images, is commonly used to estimate the subpixel shift between images. However, when the images are affected by additive white Gaussian noise and aliasing artifacts, the estimation error increases. Thus, instead of using the whole image, the proposed method uses a specific local region that is less affected by noise. In addition, to improve the estimation accuracy, iterative phase correlation is applied between the selected local regions rather than using a fitting function. The restricted range is determined by analyzing the maximum peak and its two adjacent values in the inverse Fourier transform of the normalized cross power spectrum. In the experiments, the proposed method registers noisy images more accurately than the other methods, and the edge sharpness and clearness of the super-resolved image are also improved.
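
A minimal sketch of the phase-correlation core that the method builds on: it returns the integer-pixel shift from the peak of the inverse-transformed, normalized cross power spectrum. The paper's contributions, selecting a noise-robust local region and iterating the phase correlation over a restricted range to reach subpixel precision, are not reproduced here.

```python
import numpy as np

def phase_correlation_peak(img1, img2):
    """Integer-pixel shift estimate between two same-sized images."""
    F1, F2 = np.fft.fft2(img1), np.fft.fft2(img2)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12            # normalized cross power spectrum
    corr = np.real(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative values
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```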

Non-homogeneous noise removal for side scan sonar images using a structural sparsity based compressive sensing algorithm (구조적 희소성 기반 압축 센싱 알고리즘을 통한 측면주사소나 영상의 비균일 잡음 제거)

  • Chen, Youngseng;Ku, Bonwha;Lee, Seungho;Kim, Seongil;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.37 no.1 / pp.73-81 / 2018
  • The quality of side scan sonar images is determined by the frequency of the sonar: a side scan sonar operating at low frequency produces low-quality images. One of the factors leading to low quality is high-level noise, which is caused by the underwater environment, including equipment noise, signal interference, and so on. In addition, to compensate for the transmission loss of sonar signals, the received signal is recovered by TVG (Time-Varied Gain); consequently, side scan sonar images contain non-homogeneous noise, in contrast to optical images, whose noise is usually assumed to be homogeneous. In this paper, SSCS (Structural Sparsity based Compressive Sensing) is proposed for removing this non-homogeneous noise. The algorithm incorporates both local and non-local models in a structural feature domain, so that it guarantees sparsity and enhances the property of non-local self-similarity. Moreover, the non-local model is corrected to account for the non-homogeneity of the noise. Various experimental results show that the proposed algorithm is superior to existing methods.
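
The SSCS algorithm itself is too involved to sketch here; the snippet below only illustrates the non-homogeneity issue the abstract raises, by estimating a range-dependent noise scale (as produced by TVG) from a robust per-range-bin spread and dividing it out. This is an assumed pre-normalization step for illustration, not part of the proposed method, and the function name and axis convention are hypothetical.

```python
import numpy as np

def normalize_range_dependent_noise(sonar, range_axis=1):
    """Estimate a per-range-bin noise scale with the median absolute deviation
    (TVG amplifies noise with range) and divide it out, so that a denoiser
    assuming homogeneous noise can be applied afterwards."""
    reduce_axis = 1 - range_axis
    med = np.median(sonar, axis=reduce_axis, keepdims=True)
    mad = np.median(np.abs(sonar - med), axis=reduce_axis, keepdims=True)
    sigma = 1.4826 * mad + 1e-6               # robust noise scale per range bin
    return (sonar - med) / sigma, med, sigma
```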