• Title/Summary/Keyword: fusion of sensor information


A Tracking Algorithm for Autonomous Navigation of AGVs: Federated Information Filter

  • Kim, Yong-Shik;Hong, Keum-Shik
    • Journal of Navigation and Port Research
    • /
    • v.28 no.7
    • /
    • pp.635-640
    • /
    • 2004
  • In this paper, a tracking algorithm for autonomous navigation of automated guided vehicles (AGVs) operating in container terminals is presented. The developed navigation algorithm takes the form of a federated information filter used to detect other AGVs and avoid obstacles using fused information from multiple sensors. Algebraically equivalent to the Kalman filter (KF), the information filter is extended to N-sensor distributed dynamic systems. In multi-sensor environments, an information-based filter is easier to decentralize, initialize, and fuse than a KF-based filter. It is proved that the information state and the information matrix of the suggested filter, which are weighted by an information-sharing factor, are equal to those of a centralized information filter under regularity conditions. Numerical examples using Monte Carlo simulation are provided to compare the centralized information filter with the proposed one.
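The information-form update the abstract refers to can be sketched as follows. This is a minimal multi-sensor information filter measurement update in Python, not the paper's federated form with information-sharing factors; the two-state system, measurement models, and noise values are illustrative assumptions.

```python
import numpy as np

def information_fusion(y_prior, Y_prior, measurements):
    """Fuse N sensor measurements in information form.

    y_prior : prior information state (Y_prior @ x_prior)
    Y_prior : prior information matrix (inverse covariance)
    measurements : list of (z, H, R) tuples, one per sensor
    """
    y, Y = y_prior.copy(), Y_prior.copy()
    for z, H, R in measurements:
        Rinv = np.linalg.inv(R)
        Y = Y + H.T @ Rinv @ H      # information matrix: add each sensor's contribution
        y = y + H.T @ Rinv @ z      # information state: add each sensor's contribution
    x_post = np.linalg.solve(Y, y)  # recover the state estimate x = Y^-1 y
    return x_post, Y

# Two sensors, both measuring position of a (position, velocity) state
x_prior = np.array([0.0, 1.0])
Y_prior = np.linalg.inv(np.diag([1.0, 1.0]))
y_prior = Y_prior @ x_prior

H = np.array([[1.0, 0.0]])
meas = [(np.array([0.4]), H, np.array([[0.5]])),
        (np.array([0.6]), H, np.array([[0.5]]))]
x_post, Y_post = information_fusion(y_prior, Y_prior, meas)  # x_post -> [0.4, 1.0]
```

Because each sensor enters as an additive term, decentralizing or adding an N-th sensor is just another summand, which is the ease-of-fusion property the abstract highlights.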

Sonar-Based Certainty Grids for Autonomous Mobile Robots (초음파 센서을 이용한 자율 이동 로봇의 써튼티 그리드 형성)

  • 임종환;조동우
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • v.39 no.4
    • /
    • pp.386-392
    • /
    • 1990
  • This paper describes a sonar-based certainty grid, a probabilistic representation of uncertain and incomplete sensor knowledge, for autonomous mobile robot navigation. We use sonar range data to build a map of the robot's surroundings. This range data provides information about the location of objects that may exist in front of the sensor. From this information, we can compute, for each cell, the probability of being occupied and the probability of being empty. In this paper, a new method using Bayes' formula is introduced, which overcomes some difficulties of the ad-hoc formula that has previously been the only way of updating the grids. The new formula can be applied to other kinds of sensors as well as sonar. Its validity in the real world is verified through simulation and experiment. This paper also shows that a wide-angle sensor such as sonar can be used effectively to identify empty areas, and that the simultaneous use of multiple sensors and fusion in a certainty grid can improve the quality of the map.

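The Bayesian grid update contrasted with the ad-hoc formula above can be sketched for a single cell. This is a minimal sketch assuming a simple inverse sensor model that emits an occupancy probability per reading; the 0.7 values are illustrative, not from the paper.

```python
def bayes_update(p_cell, p_occ_given_meas):
    """Binary Bayes update of one grid cell's occupancy probability.

    p_cell           : current P(occupied) for the cell
    p_occ_given_meas : inverse sensor model output P(occupied | reading)
    """
    num = p_occ_given_meas * p_cell
    den = num + (1.0 - p_occ_given_meas) * (1.0 - p_cell)
    return num / den

# Start from an unknown cell (0.5) and apply two "likely occupied" readings
p = 0.5
for reading in (0.7, 0.7):   # assumed inverse-model outputs
    p = bayes_update(p, reading)
# repeated consistent evidence pushes the cell toward "occupied"
```

Fusing a second sensor is just more calls to the same update, which is why the same formula extends beyond sonar.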

Efficient Processing Scheme for Correlated Data in Ubiquitous Sensor Networks (유비쿼터스 센서 네트워크에서 연관된 데이터의 효율적인 처리방안)

  • Ryu, Jea-Tek;Heo, Nam-Ho;Yoo, Seung-Wha;Kim, Ki-Hyung
    • 한국정보통신설비학회:학술대회논문집
    • /
    • 2008.08a
    • /
    • pp.63-68
    • /
    • 2008
  • Nowadays, as ubiquitous technology matures, a variety of services are being developed. The purpose of a sensor network is to collect information about the environment and geography, but sensor networks face tight constraints on power, cost, and other resources. Some sensor networks are deployed to monitor the environment, and the sensed data are correlated: sensor nodes sample periodically, so each reading is correlated with the next. Exploiting this, the paper suggests a power-saving method for the case where successive readings are identical or similar.

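The power-saving idea above, suppressing readings that are the same as or similar to the previous one, can be sketched as a simple send-on-delta filter. The threshold and readings below are illustrative assumptions, not the paper's parameters.

```python
def filter_transmissions(samples, threshold):
    """Transmit a reading only when it differs from the last
    transmitted one by more than the threshold; otherwise the
    sink reuses the previous value, saving radio energy."""
    sent = []
    last = None
    for s in samples:
        if last is None or abs(s - last) > threshold:
            sent.append(s)
            last = s
    return sent

# Periodic temperature-like readings; small drifts are suppressed
readings = [20.0, 20.1, 20.05, 22.0, 22.1, 25.0]
sent = filter_transmissions(readings, threshold=1.0)   # -> [20.0, 22.0, 25.0]
```

Here half the transmissions are skipped; since the radio dominates a sensor node's power budget, each skipped transmission is a direct energy saving.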

The Design and Implementation for Efficient C2A (효율적인 방공 지휘통제경보체계를 위한 설계 및 구현)

  • Kwon, Cheol-Hee;Hong, Dong-Ho;Lee, Dong-Yun;Lee, Jong-Soon;Kim, Young-Vin
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.12 no.6
    • /
    • pp.733-738
    • /
    • 2009
  • In this paper, we propose a design and implementation for an efficient Command, Control, and Alert (C2A) system. Information fusion must be performed to determine the state and identity of targets observed by multiple sensors. The threat priority of targets processed and identified through information fusion is calculated by air-defence operation logic, and the threatening targets are assigned to valid and effective weapons by a nearest-neighbor algorithm. Furthermore, the assignment result allows operators to run the C2A system effectively by visualizing symbol colors and color-coded assignment pairing lines. We introduce a prototype implemented with the proposed design and algorithm.
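The nearest-neighborhood assignment step can be sketched as greedily pairing priority-ordered threats with their closest free weapons. This is a minimal sketch with assumed 2-D positions; the actual C2A computes threat priority from air-defence operation logic first.

```python
import math

def assign_weapons(threats, weapons):
    """Greedily pair each threat (already sorted by threat priority)
    with its nearest still-unassigned weapon."""
    free = dict(weapons)                 # weapon name -> (x, y)
    pairs = {}
    for name, pos in threats:            # highest-priority threat first
        if not free:
            break
        best = min(free, key=lambda w: math.dist(pos, free[w]))
        pairs[name] = best               # commit this pairing
        del free[best]                   # weapon no longer available
    return pairs

threats = [("T1", (0.0, 1.0)), ("T2", (5.0, 5.0))]   # assumed positions
weapons = [("W1", (0.0, 0.0)), ("W2", (6.0, 6.0))]
pairs = assign_weapons(threats, weapons)             # {"T1": "W1", "T2": "W2"}
```

The returned pairing is what the prototype renders as color-coded assignment lines between threat and weapon symbols.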

Multi Sources Track Management Method for Naval Combat Systems (다중 센서 및 다중 전술데이터링크 환경 하에서의 표적정보 처리 기법)

  • Lee, Ho Chul;Kim, Tae Su;Shin, Hyung Jo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.20 no.2
    • /
    • pp.126-131
    • /
    • 2014
  • This paper is concerned with a track management method for a naval combat system that receives track information from multiple sensors and multiple tactical datalinks. Since managing track information from diverse sources can be formulated as a data fusion problem, this paper deals with the data fusion architecture, the track association, and the track information determination algorithm for track management in naval combat systems.
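The track association step above can be sketched as gated nearest-neighbor matching of incoming sensor or datalink tracks to system tracks. A minimal sketch with assumed 2-D track positions and an assumed gate; operational systems would use statistical distances with track covariances.

```python
import math

def associate_tracks(system_tracks, sensor_tracks, gate):
    """Match each incoming sensor track to its nearest unused system
    track, rejecting pairs whose distance exceeds the gate (those
    spawn new system tracks instead)."""
    assoc, used = {}, set()
    for sid, spos in sensor_tracks.items():
        cand = [(math.dist(spos, tpos), tid)
                for tid, tpos in system_tracks.items() if tid not in used]
        if not cand:
            continue
        d, tid = min(cand)               # nearest remaining system track
        if d <= gate:                    # accept only inside the gate
            assoc[sid] = tid
            used.add(tid)
    return assoc

system_tracks = {"A": (0.0, 0.0), "B": (10.0, 0.0)}          # assumed
sensor_tracks = {"s1": (0.5, 0.0), "s2": (9.0, 0.0), "s3": (50.0, 50.0)}
assoc = associate_tracks(system_tracks, sensor_tracks, gate=2.0)
# s1 -> A, s2 -> B; s3 falls outside every gate and stays unassociated
```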

Detection The Behavior of Smartphone Users using Time-division Feature Fusion Convolutional Neural Network (시분할 특징 융합 합성곱 신경망을 이용한 스마트폰 사용자의 행동 검출)

  • Shin, Hyun-Jun;Kwak, Nae-Jung;Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.24 no.9
    • /
    • pp.1224-1230
    • /
    • 2020
  • Since the spread of smartphones, interest in wearable devices has grown and diversified; such devices are closely tied to users' lives and have been used to provide personalized services. In this paper, we propose a method to detect user behavior by applying data from the 3-axis acceleration sensor and 3-axis gyro sensor embedded in a smartphone to a convolutional neural network. Human behaviors differ in the size and range of motion and in the starting time, ending time, and duration of the signal segments that constitute each motion, so feeding raw windows into a convolutional neural network causes accuracy problems. We therefore propose a Time-Division Feature Fusion Convolutional Neural Network (TDFFCNN) that learns the characteristics of the sensor data segmented over time. The proposed method outperformed other classifiers such as SVM, IBk, a plain convolutional neural network, and a long short-term memory (LSTM) recurrent neural network.
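The time-division idea, segmenting a sensor window over time and fusing per-segment features, can be sketched without the convolutional layers. In the TDFFCNN the segments feed convolutional branches; here simple per-segment statistics stand in, and the window size and segment count are assumptions.

```python
import numpy as np

def time_division_features(window, n_segments):
    """Split a (time, channels) sensor window into equal time segments
    and fuse simple per-segment, per-channel statistics into one
    feature vector (a stand-in for per-segment conv features)."""
    segments = np.array_split(window, n_segments, axis=0)
    feats = []
    for seg in segments:
        feats.append(seg.mean(axis=0))   # per-channel mean of the segment
        feats.append(seg.std(axis=0))    # per-channel spread of the segment
    return np.concatenate(feats)

# 128 samples of 6 channels (3-axis accel + 3-axis gyro), assumed sizes
window = np.random.default_rng(0).normal(size=(128, 6))
vec = time_division_features(window, n_segments=4)   # shape (4 * 2 * 6,) = (48,)
```

Because each segment contributes its own features, motions whose informative part sits early or late in the window are no longer averaged away, which is the accuracy problem the abstract identifies.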

A Fusion Sensor System for Efficient Road Surface Monitoring on UGV (UGV에서 효율적인 노면 모니터링을 위한 퓨전 센서 시스템)

  • Seonghwan Ryu;Seoyeon Kim;Jiwoo Shin;Taesik Kim;Jinman Jung
    • Smart Media Journal
    • /
    • v.13 no.3
    • /
    • pp.18-26
    • /
    • 2024
  • Road surface monitoring is essential for maintaining road safety by managing risk factors such as rutting and cracks. Autonomous-driving-based UGVs with high-performance 2D laser sensors enable more precise measurements, but the increased energy consumption of these sensors is at odds with the constrained battery capacity. In this paper, we propose a fusion sensor system for efficient road surface monitoring with UGVs. The proposed system combines color information from cameras and depth information from line laser sensors to accurately detect surface displacement. Furthermore, a dynamic sampling algorithm controls the scanning frequency of the line laser sensors based on the detection status of monitoring targets from the camera sensors, reducing unnecessary energy consumption. A power consumption model of the fusion sensor system analyzes its energy efficiency across various crack distributions and sensor characteristics in different mission environments. Performance analysis demonstrates that, with the active-state power of the line laser sensor set to twice that of the saving state, the proposed system improves power consumption efficiency by 13.3% compared with fixed sampling under the condition λ=10, µ=10.
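The energy argument can be illustrated with a toy average-power model. The duty fraction below is an illustrative assumption, not the paper's λ/µ crack-arrival model; only the 2:1 active-to-saving power ratio follows the setting quoted in the abstract.

```python
def mean_power(p_active, p_saving, duty_active):
    """Average laser power when the sensor is active (scanning) a
    fraction duty_active of the time and in the saving state otherwise."""
    return duty_active * p_active + (1.0 - duty_active) * p_saving

# Active draw is twice the saving-state draw, as in the paper's setting
p_saving, p_active = 1.0, 2.0
fixed = mean_power(p_active, p_saving, duty_active=1.0)    # laser always scanning
dynamic = mean_power(p_active, p_saving, duty_active=0.4)  # on only near camera-detected cracks
saving_ratio = (fixed - dynamic) / fixed                   # fractional power saved
```

The camera, being cheap to run, acts as the trigger that keeps the expensive line laser in the saving state whenever no monitoring target is in view.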

Evaluation of Spatio-temporal Fusion Models of Multi-sensor High-resolution Satellite Images for Crop Monitoring: An Experiment on the Fusion of Sentinel-2 and RapidEye Images (작물 모니터링을 위한 다중 센서 고해상도 위성영상의 시공간 융합 모델의 평가: Sentinel-2 및 RapidEye 영상 융합 실험)

  • Park, Soyeon;Kim, Yeseul;Na, Sang-Il;Park, No-Wook
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.807-821
    • /
    • 2020
  • The objective of this study is to evaluate the applicability of representative spatio-temporal fusion models, originally developed for fusing mid- and low-resolution satellite images, to constructing time-series high-resolution images for crop monitoring. In particular, the effects of the characteristics of the input image pairs on prediction performance are investigated in light of the principle of spatio-temporal fusion. An experiment fusing multi-temporal Sentinel-2 and RapidEye images over agricultural fields was conducted to evaluate prediction performance. Three representative fusion models were applied to this comparative experiment: the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), the SParse-representation-based SpatioTemporal reflectance Fusion Model (SPSTFM), and Flexible Spatiotemporal DAta Fusion (FSDAF). The three spatio-temporal fusion models exhibited different prediction performance in terms of prediction errors and spatial similarity. However, regardless of the model type, the correlation between the coarse-resolution images acquired on the pair dates and on the prediction date was more important for improving prediction performance than the temporal difference between the pair dates and the prediction date. In addition, using a vegetation index as the input to spatio-temporal fusion showed better prediction performance, by alleviating error propagation, than computing the vegetation index from fused reflectance values. These experimental results can serve as basic information both for selecting optimal image pairs and input types and for developing advanced spatio-temporal fusion models for crop monitoring.
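The core principle behind STARFM-style spatio-temporal fusion, carrying the coarse-scale temporal change over to the fine image of the pair date, can be sketched in a drastically simplified form (no spectral/spatial neighbor weighting; the toy reflectance values are assumptions).

```python
import numpy as np

def simple_st_fusion(fine_t1, coarse_t1, coarse_t2):
    """Predict the fine-resolution image at t2 by adding the
    coarse-scale temporal change (t1 -> t2) to the fine image at t1.
    Real STARFM weights this by spectral and spatial similarity."""
    return fine_t1 + (coarse_t2 - coarse_t1)

fine_t1 = np.array([[0.2, 0.3],
                    [0.4, 0.5]])          # fine image on the pair date
coarse_t1 = np.full((2, 2), 0.3)          # coarse image, pair date (resampled)
coarse_t2 = np.full((2, 2), 0.4)          # coarse image, prediction date
pred = simple_st_fusion(fine_t1, coarse_t1, coarse_t2)  # each pixel raised by 0.1
```

This makes the paper's finding intuitive: the prediction can only be as good as the coarse images' ability to describe the change, so the correlation between the pair-date and prediction-date coarse images matters more than how far apart the dates are.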

On-line Measurement and Characterization of Nano-web Qualities Using a Stochastic Sensor Fusion System: Design and Implementation of NAFIS (NAno-Fiber Information System)

  • Kim, Joovong;Lim, Dae-Young;Byun, Sung-Weon
    • Proceedings of the Korean Fiber Society Conference
    • /
    • 2003.10a
    • /
    • pp.45-46
    • /
    • 2003
  • A process control system has been developed for measurement and characterization of nanofiber web qualities. The nano-fiber information system (NAFIS) consists of a measurement device and an analysis algorithm: a microscope-laser sensor fusion system and a process information system, respectively. NAFIS has proven so successful in detecting irregularities of pore and diameter that the resulting product has remained well under control even at high production rates. Pore distribution, fiber diameter, and mass uniformity have been readily measured and analyzed by integrating non-contact measurement technology with a random-function-based time-domain signal/image processing algorithm. Qualities of the nano-fiber webs are reported as statistical parameters for the above characteristics, calculated and stored at fixed intervals along with time-specific information. The quality matrix, a scale of homogeneity, is easily obtained through the easy-to-use GUI. Finally, NAFIS has been evaluated both for real-time measurement and analysis and for process monitoring.


Hierarchical Clustering Approach of Multisensor Data Fusion: Application of SAR and SPOT-7 Data on Korean Peninsula

  • Lee, Sang-Hoon;Hong, Hyun-Gi
    • Proceedings of the KSRS Conference
    • /
    • 2002.10a
    • /
    • pp.65-65
    • /
    • 2002
  • In remote sensing, images are acquired over the same area by sensors of different spectral ranges (from the visible to the microwave) and/or with different numbers, positions, and widths of spectral bands. These images are generally partially redundant, as they represent the same scene, and partially complementary. For many image classification applications, the information provided by a single sensor is often incomplete or imprecise, resulting in misclassification. Fusion with redundant data can draw more consistent inferences for the interpretation of the scene and can thereby improve classification accuracy. The common approach to classifying multisensor data, as a data fusion scheme at the pixel level, is to concatenate the data into one vector as if they were measurements from a single sensor. However, the multiband data acquired by a single multispectral sensor, or by two or more different sensors, are not completely independent, and a certain degree of informative overlap may exist between the observation spaces of the different bands. This dependence may make the data less informative and should be properly modeled in the analysis so that its effect can be eliminated. For modeling and eliminating the effect of such dependence, this study employs a strategy using self and conditional information variation measures. The self information variation reflects the self-certainty of the individual bands, while the conditional information variation reflects the degree of dependence between the different bands. One data set might be far less reliable than the others and may even degrade the classification results; such an unreliable data set should be excluded from the analysis. To account for this, the self information variation is used to measure the degree of reliability. A team of positively dependent bands can gather more information jointly than a team of independent ones; but when bands are negatively dependent, the combined analysis of these bands may yield worse information. Using the conditional information variation measure, the multiband data are split into two or more subsets according to the dependence between the bands. Each subset is classified separately, and a data fusion scheme at the decision level is applied to integrate the individual classification results. In this study, a two-level algorithm using a hierarchical clustering procedure is used for unsupervised image classification. Hierarchical clustering is based on similarity measures between all pairs of candidates considered for merging. At the first level, the image is partitioned into regions, i.e., sets of spatially contiguous pixels, such that no union of adjacent regions is statistically uniform. The regions resulting from this level are then clustered into a parsimonious number of groups according to their statistical characteristics. The algorithm has been applied to satellite multispectral data and airborne SAR data.

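The self and conditional information variation measures above are entropy-based; their flavor can be sketched with plain Shannon entropy and mutual information over discretized band values. The exact measures in the paper differ, and the sequences below are illustrative assumptions.

```python
import math
from collections import Counter

def entropy(xs):
    """Shannon entropy (bits) of a discrete value sequence."""
    n = len(xs)
    return -sum(c / n * math.log2(c / n) for c in Counter(xs).values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y): how much two discretized
    bands overlap in information; 0 means independent bands."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Two toy "bands": one fully dependent pair, one independent pair
i_same = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])   # = H(X) = 1 bit
i_ind  = mutual_information([0, 0, 1, 1], [0, 1, 0, 1])   # = 0 bits
```

A band-splitting strategy like the paper's would group bands with high mutual dependence into the same subset (their overlap is modeled jointly) and keep independent bands in separate subsets, then fuse the subset classifications at the decision level.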