• Title/Summary/Keyword: Bayesian Image Fusion

Evaluation of Geo-based Image Fusion on Mobile Cloud Environment using Histogram Similarity Analysis

  • Lee, Kiwon;Kang, Sanggoo
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.1
    • /
    • pp.1-9
    • /
    • 2015
  • Mobile and cloud platforms have become dominant paradigms for developing web services that handle large and diverse digital content for scientific and engineering applications. The two trends combine into the mobile cloud computing environment, which takes the beneficial points of each. This study designs and implements a mobile cloud application for remotely sensed image fusion, aimed at practical geo-based mobile services. The system architecture consists of two parts: a mobile web client and a cloud application server. The mobile web client provides the user interface for running the image fusion application and visualizing images, and a mobile web service for data listing and browsing. The cloud application server runs on OpenStack, an open-source cloud platform, with three server instances: a web server instance, a tiling server instance, and a fusion server instance. After metadata browsing of the processing data, image fusion by the Bayesian approach is performed using functions from Orfeo Toolbox (OTB), an open-source remote sensing library. In addition, the similarity of the fused images with respect to the input image set is estimated by histogram distance metrics; this result can serve as a reference criterion for choosing user parameters in Bayesian image fusion. The implementation strategy, based entirely on open-source components, offers a good basis for mobile services that support specific remote sensing functions beyond image fusion, expanding remote sensing application fields according to user demands.
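To make the histogram-similarity criterion concrete, here is a minimal NumPy sketch of comparing a fused band against an input band; the metric choices and the `bayesian_fusion` call in the usage comment are illustrative stand-ins, not the paper's actual OTB interface.

```python
import numpy as np

def histogram_distance(img_a, img_b, bins=256, metric="bhattacharyya"):
    """Compare two single-band images by the distance between
    their normalized gray-level histograms."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 255))
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 255))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    if metric == "bhattacharyya":
        # 0 for identical histograms; larger means less similar
        bc = np.sum(np.sqrt(ha * hb))
        return -np.log(max(bc, 1e-12))
    if metric == "intersection":
        # 1 for identical histograms; smaller means less similar
        return np.sum(np.minimum(ha, hb))
    raise ValueError(metric)

# Usage idea: sweep the fusion parameter and keep the value whose fused
# image stays closest to the reference input.
# for w in np.linspace(0.1, 0.9, 9):
#     fused = bayesian_fusion(pan, ms, weight=w)   # hypothetical fusion call
#     print(w, histogram_distance(fused, pan))
```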

Crack segmentation in high-resolution images using cascaded deep convolutional neural networks and Bayesian data fusion

  • Tang, Wen;Wu, Rih-Teng;Jahanshahi, Mohammad R.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.221-235
    • /
    • 2022
  • Manual inspection of steel box girders on long span bridges is time-consuming and labor-intensive. The quality of inspection relies on the subjective judgements of the inspectors. This study proposes an automated approach to detect and segment cracks in high-resolution images. An end-to-end cascaded framework is proposed to first detect the existence of cracks using a deep convolutional neural network (CNN) and then segment the crack using a modified U-Net encoder-decoder architecture. A Naïve Bayes data fusion scheme is proposed to reduce the false positives and false negatives effectively. To generate the binary crack mask, the original images are first divided into 448 × 448 overlapping image patches, which are classified as crack versus non-crack by a deep CNN. Next, a modified U-Net is trained from scratch using only the crack patches for segmentation. A customized loss function consisting of the binary cross entropy loss and the Dice loss is introduced to enhance segmentation performance. Additionally, a Naïve Bayes fusion strategy is employed to integrate the crack score maps from different overlapping crack patches and to decide whether each pixel belongs to a crack. Comprehensive experiments have demonstrated that the proposed approach achieves an 81.71% mean intersection over union (mIoU) score across 5 different training/test splits, which is 7.29% higher than the baseline reference implemented with the original U-Net.
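A rough sketch of how a Naïve Bayes fusion of overlapping patch score maps can be written: assuming the per-patch scores are conditionally independent given the true pixel label, per-patch log-odds simply add. The independence assumption and the 0.5 prior are standard Naïve Bayes choices, not necessarily the paper's exact weighting.

```python
import numpy as np

def naive_bayes_fuse(score_maps, prior=0.5, eps=1e-6):
    """Fuse per-pixel crack probabilities from overlapping patches.

    score_maps: (N, H, W) array; each entry is P(crack | patch_i) for a
    pixel, NaN where patch i does not cover that pixel. Under Naive
    Bayes, the per-patch log-odds are summed (with a 0.5 prior the
    prior term vanishes)."""
    p = np.clip(score_maps, eps, 1.0 - eps)
    log_odds = np.log(p) - np.log(1.0 - p)        # per-patch evidence
    total = np.nansum(log_odds, axis=0)           # skip uncovered pixels
    total += np.log(prior) - np.log(1.0 - prior)  # prior on "crack"
    return 1.0 / (1.0 + np.exp(-total))           # fused posterior

fused = naive_bayes_fuse(np.random.rand(4, 448, 448))
mask = fused > 0.5   # binary crack mask
```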

A Performance Test of Mobile Cloud Service for Bayesian Image Fusion (베이지안 영상융합을 적용한 모바일 클라우드 성능실험)

  • Kang, Sanggoo;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.4
    • /
    • pp.445-454
    • /
    • 2014
  • Cloud, big data, and mobile technologies are key marketable keywords and paradigms in Information and Communication Technology (ICT), widely used and interrelated across various platforms and web-based services. In particular, the combination of cloud and mobile is recognized as a profitable business model that holds the benefits of both. Despite these promising aspects, there are few application cases of this model that deal with geo-based data sets or imagery. Among the many considerations for geo-based cloud applications on mobile, this study focuses on a performance test of a Bayesian image fusion algorithm for satellite images on the mobile cloud. Two cloud platforms, Amazon and OpenStack, were built for the performance test, measured by CPU time stamps. Because no established scheme for mobile cloud performance testing exists yet, the experimental conditions in this study were limited to time-stamp checks. The results reveal that performance on the two platforms is at almost the same level, implying that open-source mobile cloud services based on OpenStack are sufficient for further applications dealing with geo-based data sets.
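The measurement itself reduces to wrapping the fusion call in CPU time stamps. A minimal sketch under that reading; `bayesian_fusion` in the usage comment is a placeholder, not an API from the paper.

```python
import time

def benchmark(fn, *args, repeats=5):
    """Time a call by CPU time stamps, as the abstract describes."""
    samples = []
    for _ in range(repeats):
        t0 = time.process_time()          # CPU time, not wall-clock
        fn(*args)
        samples.append(time.process_time() - t0)
    return min(samples), sum(samples) / len(samples)

# best, mean = benchmark(bayesian_fusion, pan, ms)  # hypothetical fusion call
# print(f"best {best:.3f}s, mean {mean:.3f}s")
```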

Emotion Recognition and Expression System of User using Multi-Modal Sensor Fusion Algorithm (다중 센서 융합 알고리즘을 이용한 사용자의 감정 인식 및 표현 시스템)

  • Yeom, Hong-Gi;Joo, Jong-Tae;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.1
    • /
    • pp.20-26
    • /
    • 2008
  • As intelligent robots and computers become more common, interaction between them and humans grows more important, and emotion recognition and expression are indispensable for that interaction. In this paper, we first extract emotional features from speech signals and facial images. We then apply BL (Bayesian Learning) and PCA (Principal Component Analysis) to classify five emotion patterns (normal, happy, anger, surprise, and sad). To raise the emotion recognition rate, we experiment with both decision fusion and feature fusion. In the decision fusion method, the result values of each recognition system are passed through a fuzzy membership function. In the feature fusion method, superior features are selected by SFS (Sequential Forward Selection) and fed to a neural network based on an MLP (Multi Layer Perceptron) to classify the five emotion patterns; the recognized result is then applied to a 2D facial shape to express the emotion. A sketch of the feature-fusion half follows below.
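For the SFS-plus-MLP feature fusion, scikit-learn's `SequentialFeatureSelector` and `MLPClassifier` can stand in; the feature dimensions and data here are toy stand-ins, not the paper's actual speech and face features.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neural_network import MLPClassifier

# Toy stand-ins: concatenated speech + face features and labels for
# the five emotion classes (normal/happy/anger/surprise/sad).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))        # e.g., 20 speech + 20 facial features
y = rng.integers(0, 5, size=200)

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

# Sequential Forward Selection: greedily add the feature that helps most.
sfs = SequentialFeatureSelector(
    mlp, n_features_to_select=10, direction="forward", cv=3)
sfs.fit(X, y)

# Train the MLP on the selected (fused) feature subset.
mlp.fit(sfs.transform(X), y)
print("selected feature indices:", np.flatnonzero(sfs.get_support()))
```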

Pattern Classification of Multi-Spectral Satellite Images based on Fusion of Fuzzy Algorithms (퍼지 알고리즘의 융합에 의한 다중분광 영상의 패턴분류)

  • Jeon, Young-Joon;Kim, Jin-Il
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.7
    • /
    • pp.674-682
    • /
    • 2005
  • This paper proposes a classification method for multi-spectral satellite images based on the fusion of the fuzzy G-K (Gustafson-Kessel) algorithm and the PCM algorithm. The suggested algorithm establishes initial cluster centers by selecting training data from each category and then executes the fuzzy G-K algorithm; the PCM algorithm is performed using the classification result of the fuzzy G-K algorithm. A pixel is assigned to a category when the fuzzy G-K and PCM results agree on that category. If the two results disagree, the pixel is assigned by the Bayesian maximum likelihood algorithm, which uses only data within the average intracluster distance. The information of the pixels within the average intracluster distance follows a positive normal distribution, which improves the classification result by contributing positively to the Bayesian maximum likelihood decision. The proposed method is tested on IKONOS and Landsat TM remote sensing satellite images. As a result, the overall accuracy is better than that of the individual fuzzy G-K and PCM algorithms or the conventional maximum likelihood classification algorithm.
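A compact sketch of the agreement-or-fallback rule the abstract describes: keep labels where fuzzy G-K and PCM agree, and resolve disagreements with a Gaussian maximum-likelihood decision. The per-class means and covariances are assumed given; the intracluster-distance filtering step is omitted for brevity.

```python
import numpy as np

def combine_labels(labels_gk, labels_pcm, pixels, means, covs):
    """labels_gk, labels_pcm: (N,) integer label maps from the two
    algorithms. pixels: (N, B) spectral vectors; means: (C, B);
    covs: (C, B, B) per-class Gaussian parameters."""
    C = len(means)
    agree = labels_gk == labels_pcm
    out = np.where(agree, labels_gk, -1)

    # Gaussian log-likelihood of each disputed pixel under each class.
    disputed = pixels[~agree]
    ll = np.empty((disputed.shape[0], C))
    for c in range(C):
        diff = disputed - means[c]
        inv = np.linalg.inv(covs[c])
        _, logdet = np.linalg.slogdet(covs[c])
        ll[:, c] = -0.5 * (np.einsum("nb,bc,nc->n", diff, inv, diff) + logdet)
    out[~agree] = np.argmax(ll, axis=1)  # maximum-likelihood fallback
    return out
```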

Image Segmentation Based on Fusion of Range and Intensity Images (거리영상과 밝기영상의 fusion을 이용한 영상분할)

  • Chang, In-Su;Park, Rae-Hong
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.9
    • /
    • pp.95-103
    • /
    • 1998
  • This paper proposes an image segmentation algorithm based on the fusion of range and intensity images. Following Bayesian theory, a priori knowledge is encoded by a Markov random field (MRF), and a maximum a posteriori (MAP) estimator is constructed using features extracted from the range and intensity images. In range images, objects are approximated by local planar surfaces, and the parametric space is constructed from surface parameters estimated pixelwise; in intensity images, the α-trimmed variance serves as the intensity feature. An image is segmented by optimizing the MAP estimator, constructed with a likelihood function based on edge information. Computer simulation results show that the proposed fusion algorithm effectively segments images independently of shadow, noise, and light blurring.
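The intensity feature is simple to state in code. A short sketch of the α-trimmed variance, assuming α is the fraction of values trimmed from each tail (the paper may define the trimming slightly differently).

```python
import numpy as np

def alpha_trimmed_variance(window, alpha=0.1):
    """Variance of a local window after discarding the alpha fraction
    of smallest and largest values; robust to outliers at edges."""
    v = np.sort(window.ravel())
    k = int(alpha * v.size)
    trimmed = v[k:v.size - k] if k > 0 else v
    return trimmed.var()

patch = np.random.rand(7, 7) * 255
print(alpha_trimmed_variance(patch, alpha=0.1))
```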

Emotion Recognition Method based on Feature and Decision Fusion using Speech Signal and Facial Image (음성 신호와 얼굴 영상을 이용한 특징 및 결정 융합 기반 감정 인식 방법)

  • Joo, Jong-Tae;Yang, Hyun-Chang;Sim, Kwee-Bo
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2007.11a
    • /
    • pp.11-14
    • /
    • 2007
  • Emotion recognition is essential for interaction between humans and computers. In this paper, speech signals and facial images are applied to BL (Bayesian Learning) and PCA (Principal Component Analysis) to classify patterns into five emotions (Normal, Happy, Sad, Anger, Surprise). To compensate for the weaknesses of each signal and raise the recognition rate, emotion fusion is carried out with a decision fusion method and a feature fusion method. In the decision fusion method, the recognition result values obtained from each recognition system are applied to a fuzzy membership function; in the feature fusion method, superior features are selected through SFS (Sequential Forward Selection) and applied to an MLP (Multi Layer Perceptron)-based neural network to perform the emotion fusion.
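A toy sketch of the decision-fusion half: per-class scores from each recognizer pass through a fuzzy membership function and are combined. The triangular membership and the min (fuzzy AND) combination are my assumptions; the abstract does not specify them.

```python
import numpy as np

def triangular(x, center, width):
    """Simple triangular fuzzy membership function."""
    return np.maximum(0.0, 1.0 - np.abs(x - center) / width)

def decision_fuse(speech_scores, face_scores, center=1.0, width=1.0):
    """Map each recognizer's per-class scores through a fuzzy
    membership function, combine with fuzzy AND, and pick the class
    with the largest combined membership."""
    mu_s = triangular(speech_scores, center, width)
    mu_f = triangular(face_scores, center, width)
    combined = np.minimum(mu_s, mu_f)   # fuzzy AND of the two sources
    return int(np.argmax(combined))

# Five-class example: scores from the speech and face recognizers.
speech = np.array([0.2, 0.9, 0.1, 0.3, 0.4])
face = np.array([0.3, 0.7, 0.2, 0.1, 0.5])
print(decision_fuse(speech, face))   # index of the fused emotion class
```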

Moving Object Classification through Fusion of Shape and Motion Information (형상 정보와 모션 정보 융합을 통한 움직이는 물체 인식)

  • Kim Jung-Ho;Ko Han-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.5 s.311
    • /
    • pp.38-47
    • /
    • 2006
  • A conventional classification method uses a single classifier based on a shape or motion feature. Used naively, however, this approach exhibits a weakness: classification performance is highly sensitive to the accuracy of the detected moving region, which in turn depends on the condition of the image background. In this paper, we propose to resolve this drawback and strengthen classification reliability by employing Bayesian decision fusion, optimally combining the decisions of three classifiers. The first classifier is based on shape information from Fourier descriptors, the second on shape information from image gradients, and the third on motion information. Experimental results on classifying humans and vehicles viewed from various directions with a static camera confirm a significant improvement and indicate the superiority of the proposed decision fusion method over the conventional majority voting and weighted average score approaches.
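Bayesian decision fusion of several classifiers is often written as a product rule over posteriors; a minimal sketch under the usual conditional-independence assumption (the paper's exact fusion rule may differ).

```python
import numpy as np

def bayes_fuse(posteriors, priors):
    """Fuse per-classifier posteriors P(class | evidence_i). Assuming
    the evidence sources are conditionally independent given the class,
    the fused posterior is proportional to
    prior^(1 - N) * prod_i posterior_i."""
    posteriors = np.asarray(posteriors)   # (N_classifiers, N_classes)
    n = posteriors.shape[0]
    fused = priors ** (1 - n) * np.prod(posteriors, axis=0)
    return fused / fused.sum()

# Shape classifier (Fourier descriptors), shape classifier (gradients),
# and motion classifier, each giving [P(human), P(vehicle)].
scores = [[0.6, 0.4], [0.55, 0.45], [0.8, 0.2]]
print(bayes_fuse(scores, priors=np.array([0.5, 0.5])))
```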

Multi-focus Image Fusion Technique Based on Parzen-windows Estimates (Parzen 윈도우 추정에 기반한 다중 초점 이미지 융합 기법)

  • Atole, Ronnel R.;Park, Daechul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.8 no.4
    • /
    • pp.75-88
    • /
    • 2008
  • This paper presents a spatial-domain nonparametric multi-focus image fusion technique based on kernel estimates of the class-conditional probability density functions underlying input image blocks. Image fusion is approached as a classification task whose posterior class probabilities, $P(w_i \mid B_{ikl})$, are calculated with likelihood density functions estimated from the training patterns. For each of the C input images $I_i$, the proposed method defines a class $w_i$ and forms the fused image $Z(k,l)$ from a decision map represented by a set of $P \times Q$ blocks $B_{ikl}$ whose features maximize a discriminant function based on the Bayesian decision principle. Performance of the proposed technique is evaluated in terms of RMSE and mutual information (MI) as output quality measures. The width of the kernel functions, $\sigma$, is varied, and different kernels and block sizes are applied in the evaluation. The proposed scheme is tested with C = 2 and C = 3 input images, and the results exhibit good performance.
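A small sketch of the core decision: estimate each class-conditional density with a Gaussian Parzen window of width σ and assign a block to the class maximizing prior times likelihood. The one-dimensional sharpness feature in the example is invented for illustration.

```python
import numpy as np

def parzen_density(x, train, sigma=1.0):
    """Parzen-window (Gaussian kernel) estimate of p(x | class) from
    training feature vectors of that class."""
    train = np.atleast_2d(train)              # (M, D)
    d2 = np.sum((train - x) ** 2, axis=1)
    dim = train.shape[1]
    k = np.exp(-d2 / (2 * sigma**2)) / ((2 * np.pi * sigma**2) ** (dim / 2))
    return k.mean()

def classify_block(x, class_train, priors, sigma=1.0):
    """Assign block feature x to the class maximizing
    P(w_i) * p(x | w_i), the Bayesian decision rule."""
    scores = [p * parzen_density(x, tr, sigma)
              for p, tr in zip(priors, class_train)]
    return int(np.argmax(scores))

# Two classes (one per input image); feature is a toy sharpness value.
train = [np.array([[0.9], [1.1], [1.0]]),    # in-focus blocks of image 1
         np.array([[0.1], [0.2], [0.15]])]   # in-focus blocks of image 2
print(classify_block(np.array([0.95]), train, priors=[0.5, 0.5], sigma=0.2))
```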

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a map-based localization procedure for mobile robots that uses sensor fusion in structured environments. Combining sensors with different characteristics and limited individual sensing capability is advantageous because the sensors complement and cooperate with each other to yield better information on the environment. For robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function for each sensor, predefined through experiments. For self-localization with the monocular vision, the robot uses image features consisting of vertical edge lines extracted from input camera images as natural landmark points. With the laser structured light sensor, it uses geometric features composed of corners and planes, extracted from range data at a constant height above the navigation floor, as natural landmark shapes. Although either feature group alone can sometimes localize the robot, all features from both sensors are used and fused simultaneously for reliable localization under various environmental conditions. A series of experiments verifies the advantage of multi-sensor fusion, and the experimental results are discussed in detail.
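The fusion step can be sketched as Bayesian updating over a discretized pose space: multiply the prior belief by each sensor's likelihood and renormalize. The 1-D grid and Gaussian likelihoods below are toy assumptions; the paper calibrates per-sensor reliability functions experimentally.

```python
import numpy as np

def fuse_pose_belief(prior, lik_camera, lik_laser):
    """Bayesian fusion over a discretized pose grid, assuming the two
    sensors' measurements are conditionally independent given the pose."""
    post = prior * lik_camera * lik_laser
    return post / post.sum()

# Toy 1-D pose grid: each array gives P(measurement | pose).
poses = np.linspace(0.0, 10.0, 101)
prior = np.full_like(poses, 1.0 / poses.size)
lik_cam = np.exp(-0.5 * ((poses - 4.0) / 1.5) ** 2)    # vertical-edge landmarks
lik_laser = np.exp(-0.5 * ((poses - 4.5) / 0.8) ** 2)  # corner/plane landmarks
belief = fuse_pose_belief(prior, lik_cam, lik_laser)
print("MAP pose:", poses[np.argmax(belief)])
```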