• Title/Abstract/Keyword: multi-focus image fusion

Search results: 14 (processing time: 0.022 seconds)

DCNN Optimization Using Multi-Resolution Image Fusion

  • Alshehri, Abdullah A.;Lutz, Adam;Ezekiel, Soundararajan;Pearlstein, Larry;Conlen, John
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14 No. 11
    • /
    • pp.4290-4309
    • /
    • 2020
  • In recent years, advances in machine learning have led to its widespread adoption for tasks such as object detection, image classification, and anomaly detection. Despite this promise, a network's performance is ultimately limited by the data it receives: a well-trained network will still perform poorly if the data supplied to it contains artifacts, out-of-focus regions, or other visual distortions. Under normal circumstances, images of the same scene captured from differing points of focus, angles, or modalities must be analyzed separately by the network, even though they may contain overlapping information (as with images of the same scene captured from different angles) or irrelevant information (as with infrared sensors, which capture thermal information well but not topographical detail). This can add significantly to the computational time and resources required to use the network without providing any additional benefit. In this study, we explore image fusion techniques that assemble multiple images of the same scene into a single image retaining the most salient features of the individual source images while discarding overlapping or irrelevant data that provides no benefit to the network. Applying this fusion step before a dataset is input to the network significantly reduces the number of images and can improve classification accuracy by enhancing the images while discarding irrelevant and overlapping regions.
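The fusion idea described above, combining differently focused captures of one scene into a single sharper input image, can be sketched with a simple per-pixel sharpness rule. This is a minimal illustration, not the paper's multi-resolution method: the local-variance sharpness measure, window size, and toy images below are all assumptions.

```python
import numpy as np

def local_variance(img, k=3):
    """Local variance in a k x k window as a per-pixel sharpness measure."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mean = windows.mean(axis=(-2, -1))
    return (windows ** 2).mean(axis=(-2, -1)) - mean ** 2

def fuse_multifocus(img_a, img_b, k=3):
    """Keep, per pixel, the source whose neighborhood is sharper."""
    mask = local_variance(img_a, k) >= local_variance(img_b, k)
    return np.where(mask, img_a, img_b)

# Toy example: each source is sharp in a different half of the scene.
rng = np.random.default_rng(0)
sharp = rng.random((8, 8))
a = sharp.copy(); a[:, 4:] = a[:, 4:].mean()   # right half defocused (flat)
b = sharp.copy(); b[:, :4] = b[:, :4].mean()   # left half defocused (flat)
fused = fuse_multifocus(a, b)                  # recovers the sharp halves
```

A real pipeline would fuse at multiple resolutions (e.g. pyramid or wavelet levels) rather than with a single-scale variance mask, but the selection principle is the same.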

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems
    • /
    • Vol. 17 No. 3
    • /
    • pp.556-570
    • /
    • 2021
  • Existing video expression recognition methods focus mainly on spatial feature extraction from video expression images and tend to ignore the dynamic features of video sequences. To address this, a multi-mode convolutional neural network method is proposed to improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolutional neural networks are used to extract spatiotemporal expression features: a spatial network extracts the spatial features of each static expression image, while a temporal network extracts dynamic features from the optical flow of successive expression images. The spatiotemporal features learned by the two networks are then fused by multiplication. Finally, the fused features are input to a support vector machine for facial expression classification. Experimental results show that the proposed method reaches recognition accuracies of 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, outperforming the compared methods.
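The multiplicative fusion step above amounts to an element-wise (Hadamard) product of the two streams' feature vectors, so a feature contributes strongly only when both streams agree it is active. The dimensions and random features below are stand-ins, not the paper's actual network outputs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the two network outputs: one spatial
# feature vector per clip from the spatial CNN, and one optical-flow
# based vector from the temporal CNN (16 clips, 128-D each).
spatial_feat = rng.random((16, 128))
temporal_feat = rng.random((16, 128))

# Multiplicative feature fusion: element-wise product per clip.
fused = spatial_feat * temporal_feat

# In the paper, each fused row would then be fed to an SVM classifier
# (e.g. sklearn.svm.SVC) to predict the expression label.
```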

Volume-sharing Multi-aperture Imaging (VMAI): A Potential Approach for Volume Reduction for Space-borne Imagers

  • Jun Ho Lee;Seok Gi Han;Do Hee Kim;Seokyoung Ju;Tae Kyung Lee;Chang Hoon Song;Myoungjoo Kang;Seonghui Kim;Seohyun Seong
    • Current Optics and Photonics
    • /
    • Vol. 7 No. 5
    • /
    • pp.545-556
    • /
    • 2023
  • This paper introduces volume-sharing multi-aperture imaging (VMAI), an approach proposed for volume reduction in space-borne imagers that aims to achieve high-resolution ground imagery using deep learning while occupying less volume than conventional designs. As an intermediate step in the VMAI payload development, we present a phase-1 design targeting a 1-meter ground sampling distance (GSD) at an altitude of 500 km. Although its optical imaging capability does not surpass conventional approaches, it remains attractive for specific applications on small satellite platforms, particularly surveillance missions. The design integrates one wide-field and three narrow-field cameras that share volume without optical interference. By capturing independent images from the four cameras, the payload emulates a large circular aperture to address diffraction and synthesizes high-resolution images using deep learning. Computational simulations validated the VMAI approach while addressing challenges such as the lower signal-to-noise ratio (SNR) that results from aperture segmentation. Future work will focus on further reducing the volume and refining SNR management.
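The SNR penalty from aperture segmentation mentioned above can be illustrated with a back-of-the-envelope calculation: under photon-noise-limited imaging, SNR scales with the square root of the collecting area. The diameters below are hypothetical values for illustration, not the paper's design parameters.

```python
import math

area = lambda d: math.pi * (d / 2) ** 2  # circular aperture area (m^2)

full_d = 0.6   # hypothetical full circular aperture diameter (m)
sub_d = 0.2    # hypothetical diameter of each of four sub-apertures (m)

full_area = area(full_d)       # area of the emulated large aperture
seg_area = 4 * area(sub_d)     # total collecting area of the four cameras

# Photon-noise-limited SNR ratio of the segmented system vs. a single
# full aperture: sqrt of the area ratio.
snr_ratio = math.sqrt(seg_area / full_area)  # = 2/3 for these numbers
```

With these illustrative diameters the segmented system collects 4/9 of the light, so its photon-limited SNR is about two-thirds of the full aperture's, which is the kind of deficit the paper's SNR management must compensate for.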

Red Tide Detection through Image Fusion of GOCI and Landsat OLI

  • 신지선;김근용;민지은;유주형
    • Korean Journal of Remote Sensing
    • /
    • Vol. 34 No. 2-2
    • /
    • pp.377-391
    • /
    • 2018
  • The need for remote sensing to monitor red tides efficiently over wide areas is steadily increasing. Previous studies, however, have concentrated on developing red-tide detection algorithms for ocean-color sensors alone. In this study, we propose the use of multiple sensors to address two recognized limitations of satellite-based red-tide monitoring: detecting red tides in highly turbid coastal waters and the inaccuracy of remote-sensing data. Red-tide occurrence areas were selected based on the red-tide bulletins of the National Institute of Fisheries Science, and both spatial fusion and spectral fusion were attempted using imagery from GOCI, an ocean-color sensor, and Landsat OLI, a land sensor. Through spatial fusion of the two images, improved detection results were obtained both for coastal red tides that could not be observed in the GOCI imagery and for offshore red tides where the Landsat OLI image quality was low. Spectral fusion was performed at both the feature level and the raw-data level, and the red-tide distribution patterns derived by the two methods showed no significant difference. In the feature-level method, however, the red-tide area tended to be overestimated as the spatial resolution of the imagery decreased. When pixels were decomposed by linear spectral unmixing, the difference in estimated red-tide area grew as the number of pixels with low red-tide fractions increased. At the raw-data level, Gram-Schmidt sharpening estimated a somewhat larger area than PC spectral sharpening, but the difference was not large. This study showed that spatial fusion of an ocean-color sensor and a land sensor enables red-tide detection not only offshore but also in turbid coastal waters, and, by presenting various spectral fusion methods, provided a more accurate way to estimate red-tide area. These results are expected to support more accurate detection of red tides around the Korean Peninsula and to provide the accurate red-tide area information needed to establish countermeasures for effective red-tide control.
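The linear spectral unmixing step described above can be sketched as a per-pixel least-squares problem: each pixel spectrum is modeled as a mixture of endmember spectra, and the solved fractions give the red-tide proportion of the pixel. The endmember spectra and fractions below are made-up illustrative values, and the sum-to-one and non-negativity constraints used in practice are omitted for brevity.

```python
import numpy as np

# Hypothetical endmember spectra (reflectance in 4 bands), one column
# per endmember. In the study these would be derived from the imagery.
endmembers = np.array([
    [0.08, 0.12, 0.30, 0.05],   # red-tide water
    [0.02, 0.04, 0.03, 0.01],   # clear water
]).T                             # shape: (bands, endmembers)

# A synthetic mixed pixel: 40 % red tide, 60 % clear water.
fractions_true = np.array([0.4, 0.6])
pixel = endmembers @ fractions_true

# Unconstrained least-squares unmixing recovers the fractions.
fractions, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
```

Pixels whose solved red-tide fraction is low but nonzero are exactly the ones the abstract identifies as driving the divergence in estimated red-tide area.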