• Title/Summary/Keyword: visual fusion analysis


An Analysis on the Range of Singular Fusion of Augmented Reality Devices

  • Lee, Hanul;Park, Minyoung;Lee, Hyeontaek;Choi, Hee-Jin
    • Current Optics and Photonics / v.4 no.6 / pp.540-544 / 2020
  • Current two-dimensional (2D) augmented reality (AR) devices present virtual images and information on a fixed focal plane, regardless of the various locations of ambient objects of interest around the observer. This limitation can lead to visual discomfort caused by misalignment between the view of the ambient object of interest and the visual representation on the AR device, owing to a failure of singular fusion. Since the misalignment becomes more severe as the depth difference grows, it can hamper visual understanding of the scene and interfere with the viewer's task performance. We therefore analyzed the range of singular fusion (RSF) of AR images, within which viewers can perceive the shape of an object presented on two different depth planes without difficulty caused by a failure of singular fusion. We expect this analysis to inspire the development of advanced AR systems with low visual discomfort.
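
The depth-dependent misalignment described above can be illustrated with a simple vergence calculation: the farther apart the AR focal plane and the real object are in depth, the larger the difference between the vergence angles the eyes need for each, and the harder singular fusion becomes. The sketch below is illustrative only; the interpupillary distance and the two depths are assumed values, not figures from the paper.

```python
# Illustrative sketch (not from the paper): vergence-angle difference between
# an AR image plane and a real object at a different depth. Large differences
# make single binocular fusion harder to maintain.
import math

def vergence_deg(distance_m: float, ipd_m: float = 0.064) -> float:
    """Binocular vergence angle (degrees) for a target at distance_m."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

ar_plane_m = 2.0      # assumed fixed focal plane of the AR display
object_m = 0.5        # assumed distance of the ambient object of interest

delta = abs(vergence_deg(object_m) - vergence_deg(ar_plane_m))
print(f"Vergence difference: {delta:.2f} deg")   # grows as the depth gap widens
```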

Street Fashion Information Analysis System Design Using Data Fusion

  • Park, Hye-Won;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society / v.16 no.4 / pp.879-888 / 2005
  • Fashion is hard to predict owing to rapid changes driven by consumer taste and environment, and it tends toward variety and individuality. In particular, twenty-first-century street fashion is no longer regarded as merely a subculture; it plays an important role as a source of fashion trends. Collecting and analyzing street fashion therefore helps us understand the popular fashions of the coming season, as well as consumer fashion sense and commercial districts. Fashion styles thus need to be understood both quantitatively and qualitatively by providing visual data and classifying images. Street fashion information comprises many kinds of data. The purpose of this study is to design and implement a street fashion information analysis system using data fusion. The system can present visual information from the customer's viewpoint because it analyzes image data and survey data fused together.


Multi-Focus Image Fusion Using Transformation Techniques: A Comparative Analysis

  • Ali Alferaidi
    • International Journal of Computer Science &amp; Network Security / v.23 no.4 / pp.39-47 / 2023
  • This study compares various transformation techniques for multi-focus image fusion, the procedure of merging multiple images captured at different focus distances into a single composite image with improved sharpness and clarity. The purpose of this research is to compare popular frequency-domain approaches to multi-focus image fusion, such as the Discrete Wavelet Transform (DWT), the Stationary Wavelet Transform (SWT), the DCT-based Laplacian Pyramid (DCT-LP), the Discrete Cosine Harmonic Wavelet Transform (DC-HWT), and the Dual-Tree Complex Wavelet Transform (DT-CWT). The objective is to improve understanding of these transformation techniques and of how they can be used in conjunction with one another. The analysis evaluates the ten most important parameters and highlights the unique features of each method, helping to determine which transformation technique is best suited to multi-focus image fusion applications. Based on the visual and statistical analysis, the DCT-LP is suggested as the most appropriate technique, while the results also provide valuable insight into choosing the right approach.
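
As an illustration of one of the listed approaches, the sketch below applies a single-level DWT to two source images and keeps, in each detail band, the coefficient with larger magnitude (a common fusion rule) while averaging the approximation band. It uses PyWavelets and assumes two aligned grayscale arrays; it is not the paper's full comparison pipeline.

```python
# Minimal DWT-based multi-focus fusion sketch (max-absolute-coefficient rule).
# Assumes two grayscale images of identical shape; not the paper's full pipeline.
import numpy as np
import pywt

def fuse_dwt(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a, wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b, wavelet)

    def pick(x, y):
        # keep the coefficient with larger magnitude (sharper local detail)
        return np.where(np.abs(x) >= np.abs(y), x, y)

    fused = (0.5 * (ca_a + ca_b),                      # average approximation band
             (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
    return pywt.idwt2(fused, wavelet)

a = np.random.rand(128, 128)   # stand-ins for near-focus / far-focus captures
b = np.random.rand(128, 128)
print(fuse_dwt(a, b).shape)
```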

Image Fusion Methods for Multispectral and Panchromatic Images of Pleiades and KOMPSAT 3 Satellites

  • Kim, Yeji;Choi, Jaewan;Kim, Yongil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.5 / pp.413-422 / 2018
  • Many applications using data from high-resolution multispectral satellite sensors require an image fusion step, known as pansharpening, before processing and analyzing the multispectral images when spatial fidelity is crucial. Image fusion methods aim to produce images with both higher spatial and spectral resolution while reducing the spectral distortion that arises during fusion. These methods can be classified into MRA (Multi-Resolution Analysis) and CSA (Component Substitution Analysis) approaches. To suggest an efficient image fusion method for the Pleiades and KOMPSAT (Korea Multi-Purpose Satellite) 3 satellites, this study evaluates image fusion methods for multispectral and panchromatic images. HPF (High-Pass Filtering), SFIM (Smoothing Filter-based Intensity Modulation), GS (Gram-Schmidt), and GSA (Adaptive GS) were selected as MRA- and CSA-based fusion methods and applied to the multispectral and panchromatic images. Their performance was evaluated using visual inspection and quality-index analysis. The HPF and SFIM results showed poor reproduction of spatial detail. The GS and GSA results enhanced spatial information to a level closer to the panchromatic images, but GS produced more spectral distortion on urban structures. This study shows that GSA is effective for improving the spatial resolution of multispectral images from Pleiades 1A and KOMPSAT 3.
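
As a rough illustration of one of the MRA-based methods named above, the sketch below implements the basic SFIM idea: each upsampled multispectral band is modulated by the ratio of the panchromatic image to a smoothed version of itself, injecting high-frequency spatial detail. The window size and test arrays are assumptions, not values from the study.

```python
# Minimal SFIM (Smoothing Filter-based Intensity Modulation) sketch.
# Assumes the multispectral bands are already resampled to the panchromatic grid;
# the 5x5 smoothing window is an illustrative choice, not taken from the paper.
import numpy as np
from scipy.ndimage import uniform_filter

def sfim(ms: np.ndarray, pan: np.ndarray, win: int = 5, eps: float = 1e-6) -> np.ndarray:
    """ms: (bands, H, W) upsampled multispectral; pan: (H, W) panchromatic."""
    pan_low = uniform_filter(pan, size=win)            # simulate low-resolution PAN
    ratio = pan / (pan_low + eps)                      # high-frequency spatial detail
    return ms * ratio[None, :, :]                      # modulate each band

ms = np.random.rand(4, 256, 256)    # stand-in for a 4-band image
pan = np.random.rand(256, 256)
print(sfim(ms, pan).shape)          # (4, 256, 256)
```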

Incomplete Cholesky Decomposition based Kernel Cross Modal Factor Analysis for Audiovisual Continuous Dimensional Emotion Recognition

  • Li, Xia;Lu, Guanming;Yan, Jingjie;Li, Haibo;Zhang, Zhengyan;Sun, Ning;Xie, Shipeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.810-831 / 2019
  • Recently, continuous dimensional emotion recognition from audiovisual cues has attracted increasing attention in both theory and practice. The large amount of data involved in recognition reduces the efficiency of most bimodal information fusion algorithms. In this paper, a novel algorithm, incomplete Cholesky decomposition based kernel cross-modal factor analysis (ICDKCFA), is presented and employed for continuous dimensional audiovisual emotion recognition. After the ICDKCFA feature transformation, two basic fusion strategies, feature-level fusion and decision-level fusion, are explored to combine the transformed visual and audio features for emotion recognition. Finally, extensive experiments are conducted to evaluate the ICDKCFA approach on the AVEC 2016 Multimodal Affect Recognition Sub-Challenge dataset. The experimental results show that the ICDKCFA method is faster than the original kernel cross-modal factor analysis with comparable performance. Moreover, ICDKCFA outperforms other common information fusion methods, such as canonical correlation analysis, kernel canonical correlation analysis, and cross-modal factor analysis based fusion.
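
The two fusion strategies mentioned in the abstract can be sketched generically as follows; this is not the ICDKCFA transform itself, only an illustration of feature-level versus decision-level fusion with stand-in features and a simple ridge regressor.

```python
# Generic sketch of the two fusion strategies named in the abstract
# (feature-level vs. decision-level); this is NOT the ICDKCFA transform itself.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
audio = rng.normal(size=(200, 30))     # stand-in audio features
visual = rng.normal(size=(200, 40))    # stand-in visual features
valence = rng.normal(size=200)         # continuous emotion target

# Feature-level fusion: concatenate modalities, train one regressor.
feat_model = Ridge().fit(np.hstack([audio, visual]), valence)
feat_pred = feat_model.predict(np.hstack([audio, visual]))

# Decision-level fusion: train per-modality regressors, average their outputs.
a_model = Ridge().fit(audio, valence)
v_model = Ridge().fit(visual, valence)
dec_pred = 0.5 * (a_model.predict(audio) + v_model.predict(visual))

print(feat_pred.shape, dec_pred.shape)
```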

Comparison of Image Fusion Methods to Merge KOMPSAT-2 Panchromatic and Multispectral Images (KOMPSAT-2 전정색영상과 다중분광영상의 융합기법 비교평가)

  • Oh, Kwan-Young;Jung, Hyung-Sup;Lee, Kwang-Jae
    • Korean Journal of Remote Sensing / v.28 no.1 / pp.39-54 / 2012
  • The objective of this study is to propose efficient data fusion techniques applicable to KOMPSAT-2 satellite images. The most widely used image fusion techniques, namely the high-pass filter (HPF), modified intensity-hue-saturation (modified IHS), pan-sharpened, and wavelet-based methods, were applied to four KOMPSAT-2 satellite images with different regional and seasonal characteristics. Each fusion result was compared and analyzed in terms of spatial and spectral features. Quality evaluation of the fusion techniques was performed by both quantitative and visual analysis. The quantitative measures used in this study were the relative dimensionless global error (spatial and spectral ERGAS), the spectral angle mapper (SAM), and the image quality index (Q4). The quantitative and visual results indicate that, among the methods tested, the pan-sharpened method offers the most suitable balance between spectral and spatial information. The modified IHS method preserves spatial information well but distorts spectral information, and the HPF and wavelet methods likewise preserve spatial information but not spectral information.
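
Two of the quality indices named above, SAM and ERGAS, can be written compactly as shown below. This is an illustrative implementation with assumed array shapes and resolution ratio, not the evaluation code used in the study.

```python
# Illustrative implementations of two of the quality indices named in the abstract
# (SAM and ERGAS); array shapes and the resolution ratio are assumptions.
import numpy as np

def sam_deg(ref: np.ndarray, fused: np.ndarray, eps: float = 1e-12) -> float:
    """Mean spectral angle (degrees); ref, fused: (bands, H, W)."""
    dot = np.sum(ref * fused, axis=0)
    norms = np.linalg.norm(ref, axis=0) * np.linalg.norm(fused, axis=0)
    angles = np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))
    return float(np.degrees(angles.mean()))

def ergas(ref: np.ndarray, fused: np.ndarray, ratio: float = 0.25) -> float:
    """Relative dimensionless global error; ratio = PAN pixel size / MS pixel size."""
    rmse = np.sqrt(np.mean((ref - fused) ** 2, axis=(1, 2)))
    means = np.mean(ref, axis=(1, 2))
    return float(100.0 * ratio * np.sqrt(np.mean((rmse / means) ** 2)))

ref = np.random.rand(4, 64, 64) + 0.5
fused = ref + 0.01 * np.random.rand(4, 64, 64)
print(sam_deg(ref, fused), ergas(ref, fused))
```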

Street Fashion Information Analysis System Design Using Data Fusion

  • Park, Hee-Chang;Park, Hye-Won
    • Proceedings of the Korean Data and Information Science Society Conference / 2005.10a / pp.35-45 / 2005
  • Data fusion is a method for combining data. The purpose of this study is to design and implement a street fashion information analysis system using data fusion, which can offer varied and practical information by fusing image data and survey data on street fashion. Data fusion methods include exact matching, judgmental matching, probability matching, statistical matching, and data linking; in this study, we use the exact matching method. Our system can provide visual information analysis from the customer's viewpoint because it can analyze the image data and survey data both individually and as fused data.
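
The exact matching approach mentioned here amounts to joining image records and survey records on a shared key so that only identically keyed rows are combined. The sketch below illustrates this with pandas; all column names and values are hypothetical, not taken from the paper.

```python
# Minimal sketch of the exact matching approach mentioned in the abstract:
# image records and survey records are joined on a shared respondent key.
# All column names here are hypothetical, not taken from the paper.
import pandas as pd

images = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "photo_file": ["p1.jpg", "p2.jpg", "p3.jpg"],
    "style_label": ["casual", "street", "formal"],
})
survey = pd.DataFrame({
    "respondent_id": [1, 2, 3],
    "age_group": ["20s", "20s", "30s"],
    "preferred_brand": ["A", "B", "A"],
})

# Exact matching: rows are combined only when the key values are identical.
fused = images.merge(survey, on="respondent_id", how="inner")
print(fused)
```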


Development and Implementation of Multi-source Remote Sensing Imagery Fusion Based on PCI Geomatica

  • Yu, ZENG;Jixian, ZHANG;Qin, YAN;Pinglin, QIAO
    • Proceedings of the KSRS Conference / 2003.11a / pp.1334-1336 / 2003
  • On the basis of a comprehensive analysis and summary of the image fusion algorithms provided by the PCI Geomatica software, this paper points out deficiencies in its image fusion processing functions. These limitations can be addressed by further developing PCI Geomatica on the user's side, and five effective algorithms could be added to the software. The paper also gives a detailed description of how to customize and extend PCI Geomatica using Microsoft Visual C++ 6.0, the PCI SDK, and the GDB technique. In this way, the remote sensing imagery fusion functions of PCI Geomatica can be extended.


Posture Stabilization Control for Mobile Robot using Marker Recognition and Hybrid Visual Servoing (마커인식과 혼합 비주얼 서보잉 기법을 통한 이동로봇의 자세 안정화 제어)

  • Lee, Sung-Goo;Kwon, Ji-Wook;Hong, Suk-Kyo;Chwa, Dong-Kyoung
    • The Transactions of The Korean Institute of Electrical Engineers / v.60 no.8 / pp.1577-1585 / 2011
  • This paper proposes a posture stabilization control algorithm for a wheeled mobile robot using a hybrid visual servo control method that combines position-based and image-based visual servoing (PBVS and IBVS). To overcome the chattering observed in previous studies, which used a simple threshold-based switching function, the proposed hybrid visual servo control law introduces a fusion function based on a blending function, eliminating chattering and abrupt motion of the mobile robot. Unlike previous visual servo control laws that rely on linear control methods, we also account for the nonlinearity of the wheeled mobile robot to improve the performance of the control law. The proposed posture stabilization control law using hybrid visual servoing is verified by theoretical analysis and by simulation and experimental results.
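
The idea of replacing a hard threshold switch with a blending function can be sketched schematically as below: a smooth weight mixes the PBVS and IBVS commands so the control input changes continuously. The sigmoid form, its parameters, and the command vectors are assumptions for illustration, not the paper's control law.

```python
# Schematic sketch of blending two visual-servo commands with a smooth weight
# instead of a hard threshold switch; the sigmoid and its parameters are assumptions.
import numpy as np

def blend_weight(image_error: float, threshold: float = 0.2, sharpness: float = 20.0) -> float:
    """Smoothly approaches 1 (favor IBVS) as the image-space error shrinks."""
    return 1.0 / (1.0 + np.exp(sharpness * (image_error - threshold)))

def hybrid_command(u_pbvs: np.ndarray, u_ibvs: np.ndarray, image_error: float) -> np.ndarray:
    w = blend_weight(image_error)
    return w * u_ibvs + (1.0 - w) * u_pbvs   # continuous fusion avoids chattering

u_pbvs = np.array([0.3, 0.1])   # [linear, angular] command from pose-based servoing
u_ibvs = np.array([0.1, 0.05])  # command from image-based servoing
for e in (0.5, 0.2, 0.05):
    print(e, hybrid_command(u_pbvs, u_ibvs, e))
```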

Convergent evaluation of Visual function and Stereoacuity function after Surgery for Intermittent exotropia (간헐성 외사시 수술 후 시각 기능과 입체시 기능에 대한 융복합적 평가)

  • Cho, Hyung-Chel;Ro, Hyo-Lyun;Lee, Heejae
    • Journal of the Korea Convergence Society / v.13 no.4 / pp.147-154 / 2022
  • This paper evaluated visual function and stereoacuity after surgery for intermittent exotropia. The subjects were 18 patients (10 male, 8 female) with a mean age of 12.06±5.43 years who were diagnosed with intermittent exotropia and underwent strabismus surgery; 72.2% underwent strabismus surgery once and 27.8% twice. Visual function and stereoacuity were tested in these subjects. For data analysis, frequency analysis, cross-tabulation analysis, and correlation analysis were used, with statistical significance set at p<.05. Regarding the deviation state after strabismus surgery, exodeviation was most common (72.2%); for distance sensory fusion, diplopia (50.0%) and suppression (33.3%) followed. For near sensory fusion, fusion (50.0%) was most common, followed by diplopia (44.4%). After strabismus surgery, subjects with distance stereoblindness were the most numerous at 61.1%, and no subjects fell within the normal range of 40-60 arcsec. Near stereoblindness accounted for 33.3% of subjects, and subjects with 40-60 arcsec accounted for 1.1%. Even after surgery for intermittent exotropia, some aspects of deviation state, stereoacuity, and sensory fusion did not improve. Therefore, strabismus needs to be managed and controlled through non-surgical methods before and after surgery for intermittent exotropia.