Title/Summary/Keyword: Target Fusion

Feature information fusion using multiple neural networks and target identification application of FLIR image (다중 신경회로망을 이용한 특징정보 융합과 적외선영상에서의 표적식별에의 응용)

  • 선선구;박현욱
    • Journal of the Institute of Electronics Engineers of Korea SP, v.40 no.4, pp.266-274, 2003
  • Distance Fourier descriptors of local target boundaries and feature-information fusion using multiple MLPs (multilayer perceptrons) are proposed and used to identify non-occluded and partially occluded targets in natural FLIR (forward-looking infrared) images. After segmenting a target, radial Fourier descriptors are defined from the target boundary as global shape features. The target boundary is then partitioned into four local boundaries to extract local shape features. For each local boundary, a distance function is defined from the boundary points and the line between the two extreme points of the segment, and distance Fourier descriptors are defined from this distance function as local shape features. The one global feature vector and four local feature vectors are used as inputs to multiple MLPs, whose outputs determine the final identification result for the target. Experiments show that the proposed method is superior to traditional feature sets in terms of identification performance.
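
The abstract does not give formulas, so the following is only a minimal sketch of one plausible reading of a distance Fourier descriptor: sample the perpendicular distance from each boundary point of a local segment to the chord joining the segment's two extreme points, then keep the magnitudes of the low-order DFT coefficients, normalized by the DC term. The function names, segment partitioning, and normalization are assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np

def distance_fourier_descriptors(segment, n_coeffs=8):
    """Sketch: distance Fourier descriptors for one local boundary segment.

    segment : (N, 2) array of boundary points (x, y), ordered along the contour.
    Returns magnitudes of the first n_coeffs DFT coefficients of the distance
    function, normalized by the DC term for rough scale tolerance.
    """
    p0, p1 = segment[0], segment[-1]              # two extreme points of the segment
    chord = p1 - p0
    chord_len = np.linalg.norm(chord) + 1e-12
    rel = segment - p0
    # Perpendicular distance from each boundary point to the chord (the "distance function").
    dist = np.abs(rel[:, 0] * chord[1] - rel[:, 1] * chord[0]) / chord_len
    spectrum = np.abs(np.fft.fft(dist))
    return spectrum[1:n_coeffs + 1] / (spectrum[0] + 1e-12)

# Example: four local boundaries -> four local feature vectors, one per local MLP.
boundary = np.array([[np.cos(t) * (1 + 0.3 * np.sin(3 * t)),
                      np.sin(t) * (1 + 0.3 * np.sin(3 * t))]
                     for t in np.linspace(0, 2 * np.pi, 200, endpoint=False)])
segments = np.array_split(boundary, 4)
local_features = [distance_fourier_descriptors(seg) for seg in segments]
```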

Radar Target Recognition Using a Fusion of Monostatic/Bistatic ISAR Images (모노스태틱/바이스태틱 ISAR 영상 융합을 통한 표적식별 연구)

  • Cha, Sang-Bin;Yoon, Se-Won;Hwang, Seok-Hyun;Kim, Min;Jung, Joo-Ho;Lim, Jin-Hwan;Park, Sang-Hong
    • The Journal of Korean Institute of Information Technology, v.16 no.12, pp.93-100, 2018
  • An inverse synthetic aperture radar (ISAR) image is a two-dimensional radar cross section distribution of a target. For a target approaching along the radar's line of sight (LOS), bistatic ISAR can compensate for the weakness of monostatic ISAR, which cannot obtain the vertical resolution of the image in this geometry. However, bistatic ISAR has longer processing times and more variable scattering mechanisms than monostatic ISAR, so target identification using only bistatic ISAR images can be inefficient. This paper therefore analyzes target identification performance using monostatic and bistatic ISAR images of targets approaching along the radar's LOS and proposes a method of target identification based on the fusion of the two radars. Simulation results demonstrate that identification through fusion outperforms identification using monostatic or bistatic ISAR images alone.
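
The abstract does not spell out the fusion rule. One plausible score-level sketch, under the assumption that each radar branch produces per-class similarity scores against a trained database, is to normalize the two score vectors and combine them with a weight before taking the argmax; the weight and function names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_mono_bistatic(scores_mono, scores_bi, w_mono=0.5):
    """Hypothetical score-level fusion of monostatic and bistatic ISAR classifiers.

    scores_mono, scores_bi : per-class similarity scores (higher = better match)
    w_mono : weight on the monostatic branch (assumed, e.g. set from validation accuracy)
    """
    def normalize(s):
        s = np.asarray(s, dtype=float)
        return (s - s.min()) / (s.max() - s.min() + 1e-12)   # min-max to [0, 1]

    fused = w_mono * normalize(scores_mono) + (1.0 - w_mono) * normalize(scores_bi)
    return int(np.argmax(fused)), fused

# Example with three candidate target classes.
decision, fused = fuse_mono_bistatic([0.62, 0.55, 0.40], [0.30, 0.48, 0.45])
```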

A Study on Effective Identification of Targets Flying in Formation Using ISAR Images (ISAR 영상을 이용한 효과적인 편대비행 표적식별 연구)

  • Cha, Sang-Bin;Choi, In-Oh;Jung, Joo-Ho;Park, Sang-Hong
    • IEMEK Journal of Embedded Systems and Applications, v.17 no.1, pp.67-76, 2022
  • Monostatic/bistatic inverse synthetic aperture radar (ISAR) images are two-dimensional radar cross section (RCS) distributions of a target. When there are many targets in a single radar beam, the ISAR image is generated with the targets overlapped, so it is difficult to identify them using the trained database. In addition, it is inefficient to perform target identification using monostatic and bistatic ISAR images separately, because each method has its own advantages and weaknesses. Therefore, this paper analyzes multiple-target identification performance using monostatic/bistatic ISAR images and proposes an identification method based on the fusion of the two ISAR images. To identify multiple targets, an image combination technique using trained single-target images is employed. Simulation results show the effectiveness of the proposed method.
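
The abstract does not detail the "image combination technique using trained single target images." A minimal sketch of one plausible reading is to superimpose shifted single-target ISAR templates to synthesize overlapped multi-target images, which can then be compared against an observed formation image; the functions, offsets, and similarity measure below are assumptions for illustration only.

```python
import numpy as np

def combine_templates(templates, offsets, image_shape):
    """Hypothetical synthesis of a multi-target ISAR image from single-target templates.

    templates   : list of 2-D arrays (trained single-target ISAR images)
    offsets     : list of (row, col) positions where each template is placed
    image_shape : shape of the synthesized formation image
    """
    combined = np.zeros(image_shape)
    for tpl, (r0, c0) in zip(templates, offsets):
        h, w = tpl.shape
        r1, c1 = min(r0 + h, image_shape[0]), min(c0 + w, image_shape[1])
        combined[r0:r1, c0:c1] += tpl[: r1 - r0, : c1 - c0]   # superimpose magnitudes
    return combined

def match_score(candidate, observed):
    """Normalized cross-correlation as a simple similarity measure between images."""
    a = (candidate - candidate.mean()) / (candidate.std() + 1e-12)
    b = (observed - observed.mean()) / (observed.std() + 1e-12)
    return float(np.mean(a * b))

# Example: two single-target templates placed side by side in a formation image.
templates = [np.random.rand(16, 16), np.random.rand(16, 16)]
formation = combine_templates(templates, [(8, 8), (8, 40)], image_shape=(32, 64))
```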

Experimental Verification of Multi-Sensor Geolocation Algorithm using Sequential Kalman Filter (순차적 칼만 필터를 적용한 다중센서 위치추정 알고리즘 실험적 검증)

  • Lee, Seongheon;Kim, Youngjoo;Bang, Hyochoong
    • Journal of Institute of Control, Robotics and Systems, v.21 no.1, pp.7-13, 2015
  • Unmanned aerial vehicles (UAVs) are becoming popular not only for private uses such as aerial photography but also for military uses such as surveillance, reconnaissance, and supply missions. For a UAV to successfully accomplish these kinds of missions, geolocation (localization) must be employed to track a target of interest or to fly by reference. In this research, we adopted a multi-sensor fusion (MSF) algorithm to increase the accuracy of geolocation and verified the algorithm using two multicopter UAVs. One UAV is equipped with an optical camera, and the other is equipped with an optical camera and a laser range finder. Throughout the experiment, we obtained measurements of a fixed ground target and estimated the target position using a series of coordinate transformations and a sequential Kalman filter. The results showed that the MSF performs better in estimating the target location than a single sensor. Moreover, the experimental results imply that the multi-sensor geolocation algorithm can be further improved in localization accuracy and extended to more complicated applications such as moving-target tracking and multiple-target tracking.
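
To illustrate the sequential update idea (each sensor's measurement is processed one after another within a single filter cycle), a minimal linear Kalman filter sketch for a static target position is given below. The camera/laser geometry and coordinate transformations used in the paper are omitted, and the noise values and direct-position measurement model are assumptions for illustration.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One linear Kalman measurement update."""
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ (z - H @ x)                 # state correction
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def sequential_update(x, P, measurements):
    """Apply several sensors' measurements one after another (sequential fusion)."""
    for z, H, R in measurements:
        x, P = kalman_update(x, P, z, H, R)
    return x, P

# Example: estimate a fixed ground target position (E, N) from two UAV sensors that
# both observe the position directly but with different noise levels (assumed values).
x0 = np.zeros(2)                 # initial position estimate
P0 = np.eye(2) * 100.0           # large initial uncertainty
H = np.eye(2)
cam_only  = (np.array([12.3, -4.1]), H, np.eye(2) * 9.0)   # camera-only UAV (noisier)
cam_laser = (np.array([11.8, -3.6]), H, np.eye(2) * 1.0)   # camera + laser range finder
x, P = sequential_update(x0, P0, [cam_only, cam_laser])
```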

Multimodal Attention-Based Fusion Model for Context-Aware Emotion Recognition

  • Vo, Minh-Cong;Lee, Guee-Sang
    • International Journal of Contents, v.18 no.3, pp.11-20, 2022
  • Human emotion recognition is an exciting topic that has attracted many researchers for a long time. In recent years, there has been increasing interest in exploiting contextual information for emotion recognition. Earlier explorations in psychology show that emotional perception is influenced by facial expressions as well as by contextual information from the scene, such as human activities, interactions, and body poses. These explorations initiated a trend in computer vision of exploring the critical role of contexts by treating them as modalities from which to infer emotion along with facial expressions. However, contextual information has not been fully exploited: the scene emotion created by the surrounding environment can shape how people perceive emotion. Moreover, additive fusion in multimodal training is not practical, because the modalities do not contribute equally to the final prediction. The purpose of this paper is to contribute to this growing area of research by exploring the effectiveness of the emotional scene gist in the input image for inferring the emotional state of the primary target. The emotional scene gist includes the emotion, emotional feelings, and actions or events that directly trigger emotional reactions in the input image. We also present an attention-based fusion network that combines multimodal features according to their impact on the target's emotional state. We demonstrate the effectiveness of the method through a significant improvement on the EMOTIC dataset.
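
As a generic sketch of attention-based fusion (not the paper's exact network), each modality's feature vector can be projected to a common dimension, given a learned scalar attention score, and combined as a softmax-weighted sum before classification. The module name, feature dimensions, and class count below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Generic attention-based fusion of per-modality feature vectors (sketch)."""
    def __init__(self, in_dims, d_model=256, n_classes=26):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, d_model) for d in in_dims])
        self.score = nn.ModuleList([nn.Linear(d_model, 1) for _ in in_dims])
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, feats):                        # feats: list of (B, d_i) tensors
        projected = [F.relu(p(f)) for p, f in zip(self.proj, feats)]
        scores = torch.cat([s(h) for s, h in zip(self.score, projected)], dim=1)  # (B, M)
        weights = torch.softmax(scores, dim=1).unsqueeze(-1)                      # (B, M, 1)
        stacked = torch.stack(projected, dim=1)                                   # (B, M, d_model)
        fused = (weights * stacked).sum(dim=1)        # attention-weighted fusion
        return self.head(fused)

# Example: face (512-d), body/context (256-d), and scene-gist (128-d) features, batch of 4.
model = AttentionFusion([512, 256, 128])
logits = model([torch.randn(4, 512), torch.randn(4, 256), torch.randn(4, 128)])
```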

Development of a low energy ion irradiation system for erosion test of first mirror in fusion devices

  • Kihyun Lee;YoungHwa An;Bongki Jung;Boseong Kim;Yoo kwan Kim
    • Nuclear Engineering and Technology, v.56 no.1, pp.70-77, 2024
  • A low-energy ion irradiation system based on a deuterium arc ion source with a high perveance of 1 µP for a single extraction aperture has been successfully developed for investigating ion irradiation of plasma-facing components, including the first mirror of plasma optical diagnostic systems. Under the optimum operating condition for mirror testing, the ion source delivers a beam energy of 200 eV and a current density of 3.7 mA/cm2. The ion source comprises a magnetic cusp-type plasma source, an extraction system, a target system with a Faraday cup, and a power-supply control system to ensure stable long-term operation. Operating parameters of the plasma source, such as pressure, filament current, and arc power with D2 discharge gas, were optimized for beam extraction by measuring plasma parameters with a Langmuir probe. The diode-electrode extraction system was designed using IGUN simulation and optimized for 1 µP perveance. It was successfully demonstrated that an ion beam current of ~4 mA can be extracted through the 10 mm aperture of the developed ion source. The target system with the Faraday cup was also developed to measure the beam current. With the assistance of the power control system, ion beams are extracted while maintaining a consistent arc power for more than 10 min of continuous operation.

Experimental Research on Radar and ESM Measurement Fusion Technique Using Probabilistic Data Association for Cooperative Target Tracking (협동 표적 추적을 위한 확률적 데이터 연관 기반 레이더 및 ESM 센서 측정치 융합 기법의 실험적 연구)

  • Lee, Sae-Woom;Kim, Eun-Chan;Jung, Hyo-Young;Kim, Gi-Sung;Kim, Ki-Seon
    • The Journal of Korean Institute of Communications and Information Sciences, v.37 no.5C, pp.355-364, 2012
  • Target processing mechanisms are necessary for collecting target information, real-time data fusion, and tactical environment recognition to enable cooperative engagement capability. Among these mechanisms, target tracking starts by predicting the target's speed, acceleration, and location from sensor measurements. However, reliability can be a problem because the measurements carry inherent uncertainty. Thus, a technique that uses multiple sensors is needed to detect the target and increase reliability, and a data fusion technique is necessary to process the data provided by heterogeneous sensors for target tracking. In this paper, a target tracking algorithm based on probabilistic data association (PDA) is proposed that fuses radar and ESM sensor measurements. The radar's azimuth and range measurements and the ESM sensor's bearing-only measurement are associated by the measurement fusion method. After gating the associated measurements, the target state is estimated by a PDA filter. Simulation results show that the proposed algorithm provides improved estimation under linear and circular target motions.
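
A minimal sketch of two pieces of this pipeline, under assumptions not taken from the paper: fusing the radar azimuth with the ESM bearing by inverse-variance weighting, and a chi-square validation gate such as the one applied before PDA association. The full PDA filter is omitted.

```python
import numpy as np

def fuse_bearings(theta_radar, var_radar, theta_esm, var_esm):
    """Inverse-variance fusion of the radar azimuth and the ESM bearing (sketch).

    Angles are in radians; the difference is wrapped to (-pi, pi] before fusion.
    """
    diff = np.arctan2(np.sin(theta_esm - theta_radar), np.cos(theta_esm - theta_radar))
    w = var_radar / (var_radar + var_esm)           # weight on the ESM correction
    theta_fused = theta_radar + w * diff
    var_fused = (var_radar * var_esm) / (var_radar + var_esm)
    return theta_fused, var_fused

def in_gate(z, z_pred, S, gamma=9.21):
    """Chi-square validation gate (gamma ~ 99% point for a 2-D measurement)."""
    nu = z - z_pred                                 # innovation
    d2 = float(nu.T @ np.linalg.inv(S) @ nu)        # squared Mahalanobis distance
    return d2 <= gamma

# Example: radar azimuth 30.0 deg (sigma 1.0 deg) fused with ESM bearing 31.5 deg (sigma 2.0 deg).
theta, var = fuse_bearings(np.deg2rad(30.0), np.deg2rad(1.0) ** 2,
                           np.deg2rad(31.5), np.deg2rad(2.0) ** 2)
```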

Multiple Target Angle Tracking Algorithm Based on Measurement Fusion (측정치 융합에 기반을 둔 다중표적 방위각 추적 알고리즘)

  • Ryu, Chang-Soo
    • Journal of the Institute of Electronics Engineers of Korea IE, v.43 no.3, pp.13-21, 2006
  • Ryu et al. proposed a multiple-target angle tracking algorithm using angular measurements obtained from the signal subspace estimated from the sensor array output. Ryu's algorithm has the desirable features of having no data association problem and a simple structure. However, its performance degrades seriously at low signal-to-noise ratios, and it uses only the angular measurements obtained from the signal subspace at the current sampling time, even though the signal subspace is continuously updated by the sensor array output. To improve the tracking performance of Ryu's algorithm, this paper derives a measurement fusion method based on maximum likelihood (ML), which allows the angular measurements obtained from adjacent signal subspaces to be used together with those from the signal subspace at the current sampling time. A new target angle tracking algorithm is proposed using the derived measurement fusion method. The proposed algorithm achieves better tracking performance than Ryu's algorithm while retaining its good features.
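
Under independent Gaussian measurement errors, ML fusion of several angle measurements of the same target reduces to an inverse-variance weighted average; the sketch below illustrates that reduction with assumed values and is not the paper's full derivation.

```python
import numpy as np

def ml_fuse_angles(angles, variances):
    """ML fusion of angle measurements of one target taken from adjacent
    signal-subspace updates, assuming independent Gaussian errors.
    The fused estimate is the inverse-variance weighted average."""
    angles = np.asarray(angles, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * angles) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)               # fused variance never exceeds the smallest input variance
    return fused, fused_var

# Example: bearings (deg) of one target from three consecutive subspace estimates.
fused, var = ml_fuse_angles([41.2, 40.7, 41.0], [1.0, 0.6, 0.8])
```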

Decision Level Fusion of Multifrequency Polarimetric SAR Data Using Target Decomposition based Features and a Probabilistic Ratio Model (타겟 분해 기반 특징과 확률비 모델을 이용한 다중 주파수 편광 SAR 자료의 결정 수준 융합)

  • Chi, Kwang-Hoon;Park, No-Wook
    • Korean Journal of Remote Sensing, v.23 no.2, pp.89-101, 2007
  • This paper investigates the effects of fusing multifrequency (C- and L-band) polarimetric SAR data for land-cover classification. NASA JPL AIRSAR C- and L-band data were used for supervised classification in an agricultural area to simulate the integration of ALOS PALSAR and Radarsat-2 SAR data that would become available. Several scattering features derived from target decomposition based on eigenvalue/eigenvector analysis were used as input to a support vector machine classifier, and the posterior probabilities for each frequency were then integrated by applying a probabilistic ratio model as a decision-level fusion methodology. In the case study, the L-band data had adequate penetration power and showed a classification accuracy improvement of about 22% over the C-band data, which did not penetrate sufficiently. When the data from both frequencies were fused for classification, a further improvement of about 10% in overall classification accuracy was achieved over the L-band Shh data, thanks to increased discrimination capability for each class.
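
The probabilistic ratio model itself is not reproduced here; a common way to fuse per-sensor class posteriors at decision level is to combine them with the product rule (equivalently a log-ratio sum) and renormalize, which the following sketch illustrates under that assumption with made-up values.

```python
import numpy as np

def fuse_posteriors(post_c, post_l, prior=None):
    """Decision-level fusion of per-frequency class posteriors (sketch).

    post_c, post_l : (n_classes,) posterior probabilities from the C- and L-band
                     classifiers (e.g., probabilistic SVM outputs).
    prior          : class priors; uniform if None. Product-rule fusion under an
                     independence assumption, dividing out one copy of the prior.
    """
    post_c, post_l = np.asarray(post_c, float), np.asarray(post_l, float)
    if prior is None:
        prior = np.full_like(post_c, 1.0 / len(post_c))
    fused = post_c * post_l / prior
    fused /= fused.sum()                            # renormalize to a probability vector
    return int(np.argmax(fused)), fused

# Example for three land-cover classes.
label, fused = fuse_posteriors([0.5, 0.3, 0.2], [0.6, 0.1, 0.3])
```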

Ground Target Classification Algorithm based on Multi-Sensor Images (다중센서 영상 기반의 지상 표적 분류 알고리즘)

  • Lee, Eun-Young;Gu, Eun-Hye;Lee, Hee-Yul;Cho, Woong-Ho;Park, Kil-Houm
    • Journal of Korea Multimedia Society, v.15 no.2, pp.195-203, 2012
  • This paper proposes a ground target classification algorithm based on decision fusion and a feature extraction method using multi-sensor images. The decisions obtained from the individual classifiers are fused by applying a weighted voting method to improve the target recognition rate. To classify the targets in the individual sensor images, features robust to scale and rotation are extracted: the brightness difference of the CM image obtained from the CCD image, and the boundary similarity and the width ratio between the vehicle body and the turret of the target in the FLIR image. Finally, we verify the performance of the proposed ground target classification algorithm and feature extraction method through experiments.
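
Weighted voting over per-sensor hard decisions can be sketched as accumulating each classifier's weight on its chosen class and taking the argmax; the weights and class count below are illustrative assumptions (e.g., weights set from validation accuracy), not values from the paper.

```python
import numpy as np

def weighted_vote(decisions, weights, n_classes):
    """Weighted-voting fusion of per-sensor classifier decisions (sketch).

    decisions : list of class indices, one per sensor classifier (e.g., CCD, FLIR)
    weights   : reliability weight for each classifier
    """
    votes = np.zeros(n_classes)
    for cls, w in zip(decisions, weights):
        votes[cls] += w                             # each classifier casts a weighted vote
    return int(np.argmax(votes)), votes

# Example: the CCD-based classifier votes for class 2, the FLIR-based classifier for class 1.
label, votes = weighted_vote([2, 1], [0.7, 0.9], n_classes=4)
```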