• Title/Summary/Keyword: Sensor fusion


Improvement of Position Estimation Based on the Multisensor Fusion in Underwater Unmanned Vehicles (다중센서 융합 기반 무인잠수정 위치추정 개선)

  • Lee, Kyung-Soo;Yoon, Hee-Byung
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.2 / pp.178-185 / 2011
  • In this paper, we propose a position estimation algorithm based on multisensor fusion, using equalization of state variables and a feedback structure. First, the state variables measured by the INS (the main sensor, with large error) and the DVL (the auxiliary sensor, with small error) are equalized before the prediction phase. Next, the equalized state variables are fed into each filter, and the enhanced state variables are fused through the prediction and update phases. Finally, the fused state variables are fed back to the main sensor to improve the position estimation of the UUV. For evaluation, we simulate a moving course for the UUV and assess the position estimation performance of the proposed algorithm. The evaluation results show that the proposed algorithm yields the best position estimation and remains robust at changes of the moving course.
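The fuse-then-feed-back idea above can be sketched as a variance-weighted combination of the two sensors' state estimates; the numbers and the information-form weighting below are illustrative, not the paper's exact filter.

```python
import numpy as np

def fuse_states(x_ins, P_ins, x_dvl, P_dvl):
    """Variance-weighted fusion of two state estimates.

    The fused state can be fed back to the INS to bound its drift,
    mirroring the feedback structure described in the abstract."""
    # Information (inverse-covariance) weighting: the more accurate
    # DVL estimate dominates, but the INS still contributes.
    P_fused = np.linalg.inv(np.linalg.inv(P_ins) + np.linalg.inv(P_dvl))
    x_fused = P_fused @ (np.linalg.inv(P_ins) @ x_ins +
                         np.linalg.inv(P_dvl) @ x_dvl)
    return x_fused, P_fused

# INS position drifts with large uncertainty; DVL is tighter.
x_ins = np.array([10.4, 5.2]); P_ins = np.diag([4.0, 4.0])
x_dvl = np.array([10.0, 5.0]); P_dvl = np.diag([0.25, 0.25])
x_f, P_f = fuse_states(x_ins, P_ins, x_dvl, P_dvl)
```

The fused estimate lands close to the DVL's reading, and its covariance is smaller than either sensor's alone.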

High Spatial Resolution Satellite Image Simulation Based on 3D Data and Existing Images

  • La, Phu Hien;Jeon, Min Cheol;Eo, Yang Dam;Nguyen, Quang Minh;Lee, Mi Hee;Pyeon, Mu Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.2 / pp.121-132 / 2016
  • This study proposes an approach for simulating high-spatial-resolution satellite images acquired under arbitrary sun-sensor geometry, using existing images and 3D (three-dimensional) data. First, satellite images whose spectral regions differed significantly from those of the simulated image were transformed to the simulated image's spectral regions using the UPDM (Universal Pattern Decomposition Method). Simultaneously, shadows cast by buildings or tall features under the new sun position were modeled. Then, pixels that changed from shadow to non-shadow areas, and vice versa, were simulated on the basis of the existing images. Finally, buildings viewed from the new sensor position were modeled using an open-library-based 3D reconstruction program. An experiment was conducted to simulate WV-3 (WorldView-3) images acquired under two different sun-sensor geometries, based on a Pleiades 1A image, an additional WV-3 image, a Landsat image, and 3D building models. The results show that the shapes of the buildings were modeled effectively, although some problems were noted in simulating pixels that changed from building shadow to non-shadow. Additionally, the mean reflectance of the simulated image was quite similar to that of the actual images in vegetation and water areas. However, significant gaps between the mean reflectance of the simulated and actual images were noted in soil and road areas, which could be attributed to differences in moisture content.
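The spectral transformation step can be illustrated in the spirit of pattern decomposition: decompose a pixel spectrum into coefficients on standard patterns by least squares, then resynthesize it in the target sensor's bands. The pattern values below are invented for illustration; real UPDM patterns come from library spectra.

```python
import numpy as np

# Illustrative standard patterns (water, vegetation, soil) sampled at the
# source sensor's 4 bands and the target sensor's 3 bands.
P_src = np.array([[0.05, 0.04, 0.03, 0.02],    # water
                  [0.03, 0.06, 0.05, 0.45],    # vegetation
                  [0.12, 0.18, 0.25, 0.30]]).T  # soil -> shape (4, 3)
P_dst = np.array([[0.05, 0.035, 0.02],
                  [0.04, 0.055, 0.40],
                  [0.15, 0.22, 0.31]]).T         # shape (3, 3)

def transform_spectrum(r_src):
    """Decompose a source-sensor spectrum into pattern coefficients by
    least squares, then resynthesize it in the target sensor's bands."""
    coeffs, *_ = np.linalg.lstsq(P_src, r_src, rcond=None)
    return P_dst @ coeffs

# A pure mixture of the three patterns, so the result is easy to check.
r = P_src @ np.array([0.2, 0.7, 0.1])
r_dst = transform_spectrum(r)
```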

Development of Oriental-Western Fusion Patient Monitor by Using the Clip-type Pulsimeter Equipped with a Hall Sensor, the Electrocardiograph, and the Photoplethysmograph (홀센서 집게형 맥진기와 심전도-용적맥파계를 이용한 한양방 융합용 환자감시장치 개발연구)

  • Lee, Dae-Hui;Hong, Yu-Sik;Lee, Sang-Suk
    • Journal of the Korean Magnetics Society / v.23 no.4 / pp.135-143 / 2013
  • The clip-type pulsimeter equipped with a Hall sensor has a permanent magnet attached at the "Chwan" position over the center of the radial artery. The pulsimeter is built on a hardware system that measures voltage signals. These electrical bio-signals display the pulse rate, non-invasive blood pressure, respiratory rate, pulse wave velocity (PWV), and spatial pulse wave velocity (SPWV), measured simultaneously using the radial-artery pulsimeter, the electrocardiograph (ECG), and the photoplethysmograph (PPG). The findings of this research may be useful for developing an oriental-western biomedical signal storage device, that is, a new fusion patient monitor for a U-healthcare system.
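A PWV derived from combined ECG and PPG signals typically follows the textbook pulse-transit-time relation, sketched below; the relation and the sample numbers are illustrative, not the paper's exact processing chain.

```python
def pulse_wave_velocity(r_peak_t, ppg_arrival_t, path_length_m):
    """PWV from pulse transit time: the delay between the ECG R-peak and
    the pulse's arrival at the PPG site, divided into the arterial path
    length between heart and measurement site."""
    ptt = ppg_arrival_t - r_peak_t  # pulse transit time in seconds
    return path_length_m / ptt

# R-peak at 0.10 s, pulse reaches the fingertip PPG at 0.18 s,
# over an assumed 0.6 m arterial path.
pwv = pulse_wave_velocity(0.10, 0.18, 0.6)  # m/s
```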

Detection of the Behavior of Smartphone Users using Time-division Feature Fusion Convolutional Neural Network (시분할 특징 융합 합성곱 신경망을 이용한 스마트폰 사용자의 행동 검출)

  • Shin, Hyun-Jun;Kwak, Nae-Jung;Song, Teuk-Seob
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.9 / pp.1224-1230 / 2020
  • With the spread of smartphones, interest in wearable devices has increased and diversified; such devices are closely tied to users' lives and have been used to provide personalized services. In this paper, we propose a method to detect a user's behavior by feeding the information from the 3-axis acceleration sensor and 3-axis gyro sensor embedded in a smartphone into a convolutional neural network. Human behaviors differ in the size and range of motion, the start and end times, and the duration of the signal data constituting each motion. Consequently, applying the raw signals to a convolutional neural network as-is leads to accuracy problems. We therefore propose a Time-Division Feature Fusion Convolutional Neural Network (TDFFCNN) that learns the characteristics of the sensor data segmented over time. The proposed method outperformed other classifiers such as the SVM, IBk, a convolutional neural network, and a long short-term memory recurrent neural network.
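The time-division-then-fuse structure can be sketched without a deep-learning framework: split the sensor window into time segments, extract features per segment, and fuse by concatenation. In the paper's TDFFCNN the per-segment features come from convolutional branches; the simple statistics here are a stand-in for that step.

```python
import numpy as np

def time_division_features(signal, n_segments, feature_fn):
    """Split a (channels, time) sensor window into equal time segments,
    extract features from each, and fuse them by concatenation."""
    segments = np.array_split(signal, n_segments, axis=1)
    return np.concatenate([feature_fn(s) for s in segments])

# 6 channels: 3-axis accelerometer + 3-axis gyro, 128 samples.
rng = np.random.default_rng(0)
window = rng.standard_normal((6, 128))
feats = time_division_features(
    window, n_segments=4,
    feature_fn=lambda s: np.concatenate([s.mean(axis=1), s.std(axis=1)]))
# 4 segments x (6 means + 6 stds) = 48 fused features per window
```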

Abnormal Situation Detection Algorithm via Sensors Fusion from One Person Households

  • Kim, Da-Hyeon;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information / v.27 no.4 / pp.111-118 / 2022
  • In recent years, the number of single-person elderly households has increased; when an emergency occurs inside the house, it is difficult for a person living alone to notify the outside world. Various smart-home solutions have been proposed to detect emergencies in single-person households, but video media such as home CCTV raise privacy concerns. Furthermore, if only a single sensor is used to analyze an abnormal situation in the house, the limited amount of data constrains the accuracy of the analysis. In this paper, therefore, we propose an in-house abnormal situation detection algorithm that fuses 2D LiDAR, dust, and voice sensors, which are closely related to everyday life while protecting privacy, based on their correlations. Moreover, this paper demonstrates the algorithm's reliability using data collected in a real-world environment, and presents the abnormal situations that the proposed algorithm can and cannot detect. This study focuses on detecting abnormal situations in the house and should be helpful to people living alone.
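A minimal sketch of fusing the three sensor cues into one decision is a weighted score; the weights and threshold below are invented for illustration, not the paper's learned correlations.

```python
def abnormal_score(lidar_motion, dust_level, voice_db,
                   w=(0.5, 0.2, 0.3), threshold=0.6):
    """Fuse three normalized sensor cues (each scaled to 0..1) into a
    single abnormality score by weighted sum, and compare it with a
    decision threshold."""
    score = w[0] * lidar_motion + w[1] * dust_level + w[2] * voice_db
    return score, score >= threshold

# Example: unusual movement pattern, a dust spike, and a loud sound
# (e.g. a fall) together push the fused score over the threshold.
score, is_abnormal = abnormal_score(0.9, 0.5, 0.8)
```

Fusing the cues lets a weak signal on one sensor be confirmed by the others, which is the point of using correlated sensors rather than any single one.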

An Automatic Data Collection System for Human Pose using Edge Devices and Camera-Based Sensor Fusion (엣지 디바이스와 카메라 센서 퓨전을 활용한 사람 자세 데이터 자동 수집 시스템)

  • Young-Geun Kim;Seung-Hyeon Kim;Jung-Kon Kim;Won-Jung Kim
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.19 no.1 / pp.189-196 / 2024
  • Frequent false alarms from the Intelligent Selective Control System have raised significant concerns. These persistent issues have reduced operational efficiency and market credibility among agents. Developing a new model or replacing the existing one to mitigate false alarms entails substantial opportunity costs; hence, improving the quality of the training dataset is the pragmatic choice. However, smaller organizations lack adequate capabilities for dataset collection and refinement. This paper proposes an automatic human pose data collection system centered on a human pose estimation model, utilizing camera-based sensor fusion techniques and edge devices. The system facilitates direct collection and real-time processing of field data at the network periphery, distributing the computational load that is typically centralized. Additionally, by labeling field data directly, it aids in constructing new training datasets.
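The auto-labeling loop at the edge can be sketched as confidence-filtered collection: keep only frames where the pose model is confident, and store its predictions as labels. `estimate_pose` is a hypothetical callable standing in for the paper's pose estimation model.

```python
def auto_label(frames, estimate_pose, conf_threshold=0.8):
    """Collect training samples by keeping only frames where the pose
    model is confident, storing its predicted keypoints as labels.
    estimate_pose is a hypothetical callable returning
    (keypoints, confidence)."""
    dataset = []
    for frame in frames:
        keypoints, conf = estimate_pose(frame)
        if conf >= conf_threshold:
            dataset.append({"image": frame, "label": keypoints})
    return dataset

# Stand-in model: confident on even frame ids only.
fake_model = lambda f: ([(0.5, 0.5)], 0.9 if f % 2 == 0 else 0.4)
labeled = auto_label(range(10), fake_model)
```

Thresholding on confidence trades dataset size for label quality, which matters when the goal is reducing false alarms rather than maximizing coverage.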

Signal processing of accelerometers for motion capture of human body (인체 동작 인식을 위한 가속도 센서의 신호 처리)

  • Lee, Ji-Hong;Ha, In-Soo
    • Journal of Institute of Control, Robotics and Systems / v.5 no.8 / pp.961-968 / 1999
  • In this paper we address a system that transforms sensor data into sensor information. Information from redundant accelerometers is combined to represent the configuration of the objects carrying the sensors. The basic sensor unit of the proposed system is composed of 3 accelerometers aligned along the x-y-z coordinate axes of motion. To refine the sensor information, the sensor data are first fused by geometrical optimization to reduce the variance of the sensor information. To overcome errors caused by inexact alignment of each sensor with the coordinate system, we propose a calibration technique that identifies the transformation between the coordinate axes and the real sensor axes; this calibration brings the sensor information closer to the true values. We also propose a technique that decomposes the accelerometer data into a motion acceleration component and a gravity acceleration component, yielding a more exact object configuration than raw sensor data. A set of experimental results shows the usefulness of the proposed method, including experiments in which the proposed techniques are applied to human body motion capture.
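One common way to separate the two components is a first-order low-pass filter: the slowly varying output tracks gravity, and the residual is the motion acceleration. This is a generic sketch of the decomposition idea, not the paper's exact technique; the smoothing factor `alpha` is illustrative.

```python
import numpy as np

def decompose(acc, alpha=0.9):
    """Split raw (N, 3) accelerometer samples into a slowly varying
    gravity component (first-order low-pass with smoothing factor
    alpha) and the residual motion acceleration."""
    gravity = np.zeros_like(acc)
    gravity[0] = acc[0]
    for i in range(1, len(acc)):
        gravity[i] = alpha * gravity[i - 1] + (1 - alpha) * acc[i]
    motion = acc - gravity
    return gravity, motion

# Static sensor: constant 9.81 m/s^2 on z, so motion should be zero.
acc = np.tile([0.0, 0.0, 9.81], (50, 1))
g, m = decompose(acc)
```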


Physical Offset of UAVs Calibration Method for Multi-sensor Fusion (다중 센서 융합을 위한 무인항공기 물리 오프셋 검보정 방법)

  • Kim, Cheolwook;Lim, Pyeong-chae;Chi, Junhwa;Kim, Taejung;Rhee, Sooahm
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1125-1139 / 2022
  • In an unmanned aerial vehicle (UAV) system, a physical offset can exist between the global positioning system/inertial measurement unit (GPS/IMU) sensor and an observation sensor such as a hyperspectral sensor or a lidar sensor. As a result of the physical offset, a misalignment between images can occur along the flight direction. In particular, in a multi-sensor system the observation sensor has to be swapped regularly, and a high cost must be paid to acquire calibration parameters each time. In this study, we establish a precise sensor model equation that applies to multiple sensors in common and propose an independent physical offset estimation method. The proposed method consists of 3 steps. First, we define an appropriate rotation matrix for our system and an initial sensor model equation for direct georeferencing. Next, an observation equation for physical offset estimation is established by extracting correspondences between ground control points and the data observed by the sensor. Finally, the physical offset is estimated from the observed data, and the precise sensor model equation is established by applying the estimated parameters to the initial sensor model equation. Datasets from 4 regions with different latitudes and longitudes (Jeon-ju, Incheon, Alaska, Norway) were compared to analyze the effects of the calibration parameters. We confirmed that the misalignment between images was corrected after applying the physical offset in the sensor model equation. Absolute position accuracy was analyzed on the Incheon dataset against ground control points: for the hyperspectral image, the root mean square error (RMSE) in the X and Y directions was 0.12 m, and for the point cloud the RMSE was 0.03 m. Furthermore, the relative position accuracy for specific points between the adjusted point cloud and the hyperspectral images was 0.07 m, confirming that precise data mapping is possible without ground control points through the proposed estimation method, and demonstrating the potential of multi-sensor fusion. From this study, we expect that a flexible multi-sensor platform can be operated economically through the independent parameter estimation method.
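The role of the physical offset (lever arm) in direct georeferencing can be sketched in a generic sensor-model form: platform position plus the rotated offset plus the rotated viewing ray scaled by range. This is an illustrative form, not the paper's exact equation; all numbers below are invented.

```python
import numpy as np

def georeference(p_gps, R_body_to_world, lever_arm, ray_dir, slant_range):
    """Ground position of an observed point: GPS/IMU position, plus the
    rotated lever-arm (physical offset) to the observation sensor, plus
    the rotated viewing ray scaled by the measured range."""
    sensor_origin = p_gps + R_body_to_world @ lever_arm
    return sensor_origin + slant_range * (R_body_to_world @ ray_dir)

p_gps = np.array([100.0, 200.0, 50.0])   # platform position, metres
R = np.eye(3)                            # level flight, illustrative
lever = np.array([0.3, 0.0, -0.1])       # hypothetical offset, metres
ray = np.array([0.0, 0.0, -1.0])         # nadir-looking unit ray
ground = georeference(p_gps, R, lever, ray, slant_range=49.9)
```

Ignoring `lever` shifts every georeferenced point by the rotated offset, which is exactly the along-track misalignment the abstract describes.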

Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.35-45 / 2013
  • An autonomous vehicle requires a more capable and robust perception system than the conventional perception systems of intelligent vehicles. Single-sensor perception systems have been widely studied using cameras and laser radar sensors, the most representative perception sensors, which provide object information such as distance measurements and object features. The distance information from the laser radar is used to perceive road structures, vehicles, and pedestrians in the road environment. The image information from the camera is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor perception systems suffer from false positives and false negatives caused by sensor limitations and road conditions. Accordingly, information fusion is essential to ensure the robustness and stability of perception systems in harsh environments. This paper describes a perception system for autonomous vehicles that performs information fusion to recognize road environments. In particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated on various roads and environmental conditions with an autonomous vehicle.
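Camera-lidar fusion usually starts by projecting the laser-radar points into the image plane so that distance measurements can be associated with pixels. A minimal pinhole-projection sketch, with illustrative calibration values (not the paper's), follows:

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Project (N, 3) laser-radar points into the camera image using an
    extrinsic rotation R, translation t, and intrinsic matrix K."""
    pts_cam = (R @ points_lidar.T).T + t   # lidar frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]   # keep points ahead of camera
    uvw = (K @ pts_cam.T).T
    return uvw[:, :2] / uvw[:, 2:3]        # perspective division

K = np.array([[800.0,   0.0, 320.0],       # illustrative intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)              # illustrative extrinsics
pts = np.array([[0.0, 0.0, 10.0],          # straight ahead -> image centre
                [1.0, 0.0, 10.0]])         # 1 m to the right
uv = project_lidar_to_image(pts, R, t, K)
```

Once lidar points carry pixel coordinates, an obstacle detected in range data can be cross-checked against the camera's lane or crosswalk detections, which is what suppresses the single-sensor false positives.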

Sensor Fusion of Localization using Unscented Kalman Filter (Unscented Kalman filter를 이용한 위치측정 센서융합)

  • Lee, Jun-Ha;Jung, Kyung-Hoon;Kim, Jung-Min;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.667-672 / 2011
  • This paper studies the fusion of positioning sensors using the UKF (unscented Kalman filter) to improve the positioning accuracy of an AGV (automatic guided vehicle). The major guidance systems for AGVs are wired guidance and magnetic guidance. Because of their high accuracy and fast response time, they are used in most FMSs (flexible manufacturing systems). However, they have high maintenance costs and make it difficult to modify existing paths, so in recent years they are being replaced by laser navigation. Laser navigation is a global positioning sensor that uses reflectors on the walls; it has high accuracy and makes path modification easy, but its response time is slow and it is easily affected by disturbances. In this paper, we propose a sensor fusion method that combines laser navigation with local sensors using the UKF. The proposed method improves accuracy through error analysis of the sensors. For experiments, we used an axle-driven forklift AGV and compared the positioning results of the proposed method with those of laser navigation alone. The experimental results verified that the proposed method improves positioning accuracy by about 16%.
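The UKF's core step is the unscented transform: propagate a mean and covariance through a nonlinear function via sigma points. A minimal sketch of that step (not the full filter, and not the paper's implementation) in the Merwe scaled form:

```python
import numpy as np

def unscented_transform(x, P, f, alpha=1e-1, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through a nonlinear function f
    using 2n+1 sigma points with Merwe scaled weights."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)       # matrix square root
    sigmas = np.vstack([x, x + S.T, x - S.T])   # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    Y = np.array([f(s) for s in sigmas])        # transformed points
    y = Wm @ Y                                  # transformed mean
    d = Y - y
    Py = (Wc[:, None] * d).T @ d                # transformed covariance
    return y, Py

# Sanity check: for a linear f, the transform is exact.
x = np.array([1.0, 2.0]); P = np.diag([0.5, 0.2])
A = np.array([[1.0, 0.1], [0.0, 1.0]])
y, Py = unscented_transform(x, P, lambda s: A @ s)
```

In a full UKF, this transform is applied once with the motion model (prediction) and once with the measurement model (update), which is what lets the filter fuse the slow laser navigation with the fast local sensors without linearizing either model.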