• Title/Summary/Keyword: Real scene image

Search Results: 223

Change Detection in Land-Cover Pattern Using Region Growing Segmentation and Fuzzy Classification

  • Lee Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.1
    • /
    • pp.83-89
    • /
    • 2005
  • This study utilized spatial region-growing segmentation and classification with fuzzy membership vectors to detect changes between images observed at different dates. Consider two co-registered images of the same scene, where one image is assumed to have a class map of the scene at its observation time. The method performs unsupervised segmentation and fuzzy classification for the other image, and then detects changes in the scene by examining the changes in the fuzzy membership vectors of the segmented regions during the classification procedure. The algorithm was evaluated with simulated images and then applied to a real scene of the Korean Peninsula using KOMPSAT-1 EOC images. In the experiments, the proposed method showed strong performance in detecting land-cover changes.
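The core comparison the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the L1 distance, and the threshold are all assumptions; the paper compares per-region fuzzy membership vectors between the two dates, which is the only part reproduced here.

```python
def detect_changed_regions(old_memberships, new_memberships, threshold=0.5):
    """Flag segmented regions whose fuzzy class-membership vector at the
    new date diverges from the vector implied by the reference class map.

    old_memberships / new_memberships: dict mapping region_id -> list of
    class membership grades (each vector sums to ~1.0).  A region is
    reported as changed when the L1 distance between its two vectors
    exceeds `threshold` (an illustrative choice).
    """
    changed = []
    for region_id, old_vec in old_memberships.items():
        new_vec = new_memberships[region_id]
        l1 = sum(abs(a - b) for a, b in zip(old_vec, new_vec))
        if l1 > threshold:
            changed.append(region_id)
    return changed
```

For example, a region whose membership flips from (0.9, 0.1) to (0.1, 0.9) is flagged, while one drifting from (0.5, 0.5) to (0.55, 0.45) is not.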

Scene-based Nonuniformity Correction by Deep Neural Network with Image Roughness-like and Spatial Noise Cost Functions

  • Hong, Yong-hee;Song, Nam-Hun;Kim, Dae-Hyeon;Jun, Chan-Won;Jhee, Ho-Jin
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.6
    • /
    • pp.11-19
    • /
    • 2019
  • In this paper, a new Scene-based Nonuniformity Correction (SBNUC) method is proposed by applying image roughness-like and spatial noise cost functions to a deep neural network structure. Classic approaches to nonuniformity correction generally require a large number of sequential image frames to acquire accurate correction offset coefficients. The proposed method, however, is able to estimate the offset from only a couple of images, owing to the characteristics of the deep neural network scheme. A real-world SWIR image set is used to verify the performance of the proposed method, and the results show an image quality improvement of up to 70.3 dB PSNR. This is about 8.0 dB more than the improved IRLMS algorithm, which additionally requires a precise image registration process on consecutive image frames.
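The "image roughness-like" cost the abstract mentions is presumably related to the classic roughness index used in nonuniformity-correction work: the ratio of the L1 norm of the image's first differences to the L1 norm of the image, which grows with fixed-pattern (stripe) noise. A minimal sketch of that metric, under the assumption that horizontal differences are what is penalized (the paper's exact cost may differ):

```python
def roughness(img):
    """Roughness-like index: L1 norm of horizontal first differences
    divided by the L1 norm of the image.  A flat image scores 0; a
    column-striped image (fixed-pattern noise) scores high."""
    num = sum(abs(row[x + 1] - row[x])
              for row in img for x in range(len(row) - 1))
    den = sum(abs(v) for row in img for v in row)
    return num / den if den else 0.0
```

A correction network can use such a metric as a training loss: offsets that remove column stripes drive the index toward the value of the clean scene.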

A Remote Sensing Scene Classification Model Based on EfficientNetV2L Deep Neural Networks

  • Aljabri, Atif A.;Alshanqiti, Abdullah;Alkhodre, Ahmad B.;Alzahem, Ayyub;Hagag, Ahmed
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.10
    • /
    • pp.406-412
    • /
    • 2022
  • Scene classification of very high-resolution (VHR) imagery can attribute semantics to land cover in a variety of domains. Real-world application requirements have not been addressed by conventional techniques for remote sensing image classification. Recent research has demonstrated that deep convolutional neural networks (CNNs) are effective at extracting features due to their strong feature extraction capabilities. To improve classification performance, these approaches rely primarily on semantic information. Since abstract, global semantic information makes it difficult for a network to correctly classify scene images with similar structures and high interclass similarity, such networks achieve low classification accuracy. We propose a VHR remote sensing image classification model that extracts global features from the original VHR image using an EfficientNet-V2L CNN pre-trained to discriminate similar classes. The image is then classified using a multilayer perceptron (MLP). This method was evaluated using two benchmark remote sensing datasets: the 21-class UC Merced and the 38-class PatternNet. Compared to other state-of-the-art models, the proposed model significantly improves performance.

Deep Unsupervised Learning for Rain Streak Removal using Time-varying Rain Streak Scene (시간에 따라 변화하는 빗줄기 장면을 이용한 딥러닝 기반 비지도 학습 빗줄기 제거 기법)

  • Cho, Jaehoon;Jang, Hyunsung;Ha, Namkoo;Lee, Seungha;Park, Sungsoon;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.1
    • /
    • pp.1-9
    • /
    • 2019
  • Single-image rain removal is a typical inverse problem that decomposes an image into a background scene and rain streaks. Recent works have made substantial progress on the task owing to the development of convolutional neural networks (CNNs). However, existing CNN-based approaches train the network with synthetically generated training examples, which tend to bias the network toward synthetic scenes. In this paper, we present an unsupervised framework for removing rain streaks from real-world rainy images. We focus on the natural phenomenon that static rainy scenes share a common background but different rain streaks. From this observation, we train a siamese network on pairs of real rain images so that it outputs identical backgrounds for each pair. To train our network, a real rainy dataset was constructed via web crawling. We show that our unsupervised framework outperforms recent CNN-based approaches trained in a supervised manner. Experimental results demonstrate the effectiveness of our framework on both synthetic and real-world datasets, showing improved performance over previous approaches.
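The training signal the abstract relies on, that both branches of the siamese network should emit the same static background, can be expressed as a consistency penalty between the two branch outputs. One plausible form of that objective is sketched below (the paper's exact loss, network, and weighting are not specified in the abstract, so this is an illustrative assumption):

```python
def siamese_consistency_loss(out_a, out_b):
    """Mean absolute difference between the two branch outputs (flattened
    pixel lists).  Two rainy frames of the same static scene share one
    background, so driving this toward zero pushes both branches to emit
    that shared background rather than the frame-specific rain streaks."""
    n = len(out_a)
    return sum(abs(a - b) for a, b in zip(out_a, out_b)) / n
```

In practice such a term would be combined with others (e.g. keeping the output close to the input) to avoid the degenerate solution of a constant image.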

3D Model Extraction Method Using Compact Genetic Algorithm from Real Scene Stereoscopic Image (소형 유전자 알고리즘을 이용한 스테레오 영상으로부터의 3차원 모델 추출기법)

  • Han, Gyu-Pil;Eom, Tae-Eok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.5
    • /
    • pp.538-547
    • /
    • 2001
  • 2D real-time image coding techniques have developed greatly, and many related products have been commercialized. However, these techniques lack the capability of handling the 3D actuality demanded by the advent of virtual reality, because they handle only the transmission of 2D images over time. Many 3D virtual-reality studies have also been conducted in computer graphics, but since they were limited to artificial models, they likewise could not manage 3D actuality for real-scene images. Therefore, a new 3D model extraction method based on stereo vision, which can deal with real-scene virtual reality, is proposed in this paper. The proposed method adapts a compact genetic algorithm using population-based incremental learning (PBIL) to the matching environment, in order to reduce the memory consumption and computational time of conventional genetic algorithms. Since PBIL uses a probability vector and competitive learning, the matching algorithm is simple and the computational load is considerably reduced, while the matching quality is superior to conventional methods. Even when the characteristics of the images change, stable outputs are obtained without modifying the matching algorithm.
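PBIL itself is compact: instead of storing a population, it stores one probability vector per bit, samples candidates from it, and nudges the vector toward the winning sample. The sketch below shows the generic algorithm on a stand-in fitness function; the paper applies it to stereo-matching candidates, and all parameter values here are illustrative defaults, not the paper's settings.

```python
import random

def pbil(fitness, n_bits, iters=200, pop=20, lr=0.1, seed=0):
    """Population-Based Incremental Learning over bit strings.

    Keeps a probability vector p (one entry per bit) instead of a full
    population, so memory use is O(n_bits) rather than O(pop * n_bits).
    Each iteration samples `pop` candidates, picks the fittest (the
    competitive-learning step), and moves p toward its bits by rate `lr`.
    """
    rng = random.Random(seed)
    p = [0.5] * n_bits
    best, best_fit = None, float("-inf")
    for _ in range(iters):
        samples = [[1 if rng.random() < pi else 0 for pi in p]
                   for _ in range(pop)]
        winner = max(samples, key=fitness)
        f = fitness(winner)
        if f > best_fit:
            best, best_fit = winner, f
        # shift each probability toward the winner's corresponding bit
        p = [(1 - lr) * pi + lr * bi for pi, bi in zip(p, winner)]
    return best
```

On a toy objective such as maximizing the number of 1-bits, the probability vector quickly converges toward the all-ones string.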


Density Change Adaptive Congestive Scene Recognition Network

  • Jun-Hee Kim;Dae-Seok Lee;Suk-Ho Lee
    • International journal of advanced smart convergence
    • /
    • v.12 no.4
    • /
    • pp.147-153
    • /
    • 2023
  • In recent times, the absence of effective crowd management has led to numerous stampede incidents in crowded places. A crucial component for enhancing on-site crowd management is crowd counting technology. Current approaches to analyzing congested scenes have evolved beyond simple crowd counting, which outputs only the number of people in the target image, to density map estimation. This development aligns with the demands of real-life applications, since the same number of people can exhibit vastly different crowd distributions; solely counting the crowd is no longer sufficient. CSRNet (Congested Scene Recognition Network) stands out as one representative method in this advanced category. In this paper, we propose a crowd counting network that adapts to changes in the density of people in the scene, addressing the performance degradation observed in the existing CSRNet when the density changes. To overcome this weakness, we introduce a system that takes the image's information as input and adjusts the output of CSRNet based on features extracted from the image. This improves the algorithm's adaptability to density changes, supplementing the shortcomings identified in the original CSRNet.

Real-Time Shadow Generation Using Image-Based Rendering Technique (영상기반 렌더링 기법을 이용한 실시간 그림자 생성)

  • Lee, Jung-Yeon;Im, In-Seong
    • Journal of the Korea Computer Graphics Society
    • /
    • v.7 no.1
    • /
    • pp.27-35
    • /
    • 2001
  • Shadows are important elements in producing a realistic image. In rendering, generating the exact shape and position of shadows is crucial to providing the user with visual cues about the scene. While the shadow map technique quickly generates shadows for a scene in which the objects and light sources are fixed, it slows down as they start to move. In this paper, we apply an image-based rendering technique to generate shadows in real time using graphics hardware. Due to the heavy storage requirement of a shadow map repository, we use a wavelet-based compression scheme for effective compression. Our method can be used efficiently to generate realistic scenes in many real-time applications such as 3D games and virtual reality systems.
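The wavelet compression step can be illustrated with the simplest member of the family, a one-level 1-D Haar transform: each pair of depth samples becomes an average plus a difference, and small differences can be zeroed before storage. This is only a didactic sketch; the paper does not state which wavelet it uses, so Haar here is an assumption.

```python
def haar_step(values):
    """One level of the 1-D Haar transform: pairwise averages followed by
    pairwise differences.  Depth maps are locally smooth, so most
    differences are tiny and compress well once quantized or zeroed."""
    avgs = [(values[i] + values[i + 1]) / 2 for i in range(0, len(values), 2)]
    diffs = [(values[i] - values[i + 1]) / 2 for i in range(0, len(values), 2)]
    return avgs + diffs

def haar_inverse(coeffs):
    """Exact inverse of haar_step (lossless when no coefficient is dropped)."""
    half = len(coeffs) // 2
    out = []
    for a, d in zip(coeffs[:half], coeffs[half:]):
        out.extend([a + d, a - d])
    return out
```

A 2-D shadow map would apply the same step to rows and then columns, recursively on the averages.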


A Real Time Processing Technique for Content-Aware Video Scaling (내용기반 동영상 기하학적 변환을 위한 실시간 처리 기법)

  • Lee, Kang-Hee;Yoo, Jae-Wook;Park, Dae-Hyun;Kim, Yoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.80-89
    • /
    • 2011
  • In this paper, a new real-time video scaling technique that preserves the content of a video is proposed. Because consecutive frames of a video are correlated, we determine the seam of the current frame by considering the seam of the previous frame, achieving real-time video scaling without content shaking even though the entire video is not analyzed. For this purpose, frames with similar features are grouped into a scene, and the first frame of each scene is resized by still-image seam carving so that the content of the image is preserved as much as possible. The information about the seams extracted to resize this frame is saved, and the sizes of the following frames are adjusted frame by frame with reference to the seam information stored for the previous frame. The proposed algorithm is nearly as fast as bilinear scaling while preserving the main content of the image, and because its memory usage is remarkably small compared with existing seam carving methods, it is also usable on mobile devices with tight memory constraints. Computer simulation results indicate that the proposed technique provides better objective performance and subjective image quality than conventional algorithms in terms of real-time processing, shaking removal, and content preservation.
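The temporal-coherence idea, seam carving where each frame's seam is kept close to the previous frame's seam, can be sketched as a standard dynamic-programming seam search with an added deviation penalty. The penalty weight and the exact coupling are illustrative assumptions; the paper's scheme reuses stored seam information rather than re-running the full search, but the constraint it enforces is the same.

```python
def find_vertical_seam(energy, prev_seam=None, penalty=10.0):
    """Minimum-energy vertical seam via dynamic programming.

    energy: 2-D list of per-pixel costs.  When `prev_seam` (one column
    index per row) is given, each cell pays an extra cost proportional to
    its distance from that seam, so the new seam stays close to the
    previous frame's seam and frame-to-frame shaking is suppressed.
    """
    h, w = len(energy), len(energy[0])
    cost = [row[:] for row in energy]
    if prev_seam is not None:
        for y in range(h):
            for x in range(w):
                cost[y][x] += penalty * abs(x - prev_seam[y])
    back = [[0] * w for _ in range(h)]
    for y in range(1, h):
        for x in range(w):
            choices = [(cost[y - 1][px], px)
                       for px in (x - 1, x, x + 1) if 0 <= px < w]
            best_cost, best_px = min(choices)
            cost[y][x] += best_cost
            back[y][x] = best_px
    # trace back from the cheapest bottom-row cell
    x = min(range(w), key=lambda i: cost[h - 1][i])
    seam = [0] * h
    for y in range(h - 1, -1, -1):
        seam[y] = x
        x = back[y][x]
    return seam
```

With no previous seam the search is ordinary still-image seam carving; with one, a sufficiently large penalty snaps the seam back onto the previous frame's path.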

Video Content Manipulation Using 3D Analysis for MPEG-4

  • Sull, Sanghoon
    • Journal of Broadcast Engineering
    • /
    • v.2 no.2
    • /
    • pp.125-135
    • /
    • 1997
  • This paper is concerned with realistic manipulation of content in video sequences, one of the content-based functionalities of the MPEG-4 Visual standard. We present an approach to synthesizing video sequences by using the intermediate outputs of three-dimensional (3D) motion and depth analysis. For concreteness, we focus on video showing the 3D motion of an observer relative to a scene containing planar runways (or roads). We first present a simple runway (or road) model. Then, we describe a method of identifying the runway (or road) boundary in the image using the Point of Heading Direction (PHD), defined as the image of the ray along which the camera moves. The 3D motion of the camera is obtained from one of the existing 3D analysis methods. A video sequence containing a runway is then manipulated by (i) coloring the scene part above a vanishing line, say blue, to show sky, (ii) filling in the occluded scene parts, and (iii) overlaying the identified runway edges and placing yellow disks on them to simulate lights. Experimental results for a real video sequence are presented.


Acoustooptical Approach for Moving Scene Holography

  • Petrov, Vladimir
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2003.07a
    • /
    • pp.451-462
    • /
    • 2003
  • In this paper, a method of 3D holographic moving-image reconstruction is discussed. The main idea is to substitute an optically created static hologram with an equivalent diffraction array created by an acousto-optic (AO) field formed by bulk sound waves. Such a sound field can be considered a dynamic optical hologram that is electrically controlled. At the moment the whole hologram has formed, a reference optical beam illuminates it, and through acousto-optic interaction the original optical image is reconstructed. Since the acoustically created dynamic optical hologram is electronically controlled, it can be used for reconstructing moving three-dimensional scenes in real time. The architecture of a holographic display for moving-scene reconstruction is presented, and a calculated design of a laboratory model of such a display is given and discussed. A mathematical simulation of step-by-step image recording and reconstruction is given, and pictures of the calculated reconstructed images are presented. The prospects, application areas, shortcomings, and main problems are discussed.
