• Title/Summary/Keyword: sensor fusion

Search Result 811

A Kalman filter with sensor fusion for indoor position estimation (실내 측위 추정을 위한 센서 융합과 결합된 칼만 필터)

  • Janghoon Yang
    • Journal of Advanced Navigation Technology, v.25 no.6, pp.441-449, 2021
  • With advances in autonomous vehicles, there is a growing demand for more accurate position estimation. This is especially true for a mobile robot operating indoors, which requires higher positioning accuracy when it must execute a task at a predestined location. Thus, a method for improving position estimation that is applicable to both fixed and moving objects is proposed. The proposed method uses the initial position estimates from Bluetooth beacon signals as observation signals. It then estimates the gravitational acceleration applied to each axis in an inertial frame coordinate by computing the roll and pitch angles, and combines them with magnetometer measurements to compute the yaw angle. Finally, it refines the control inputs for an object with motion dynamics by computing the acceleration on each axis, which improves the performance of the Kalman filter. Experimental assessment shows that the proposed algorithm improves position estimation accuracy over a conventional Kalman filter in terms of average error distance in both the fixed and moving states.
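The attitude step described in this abstract (roll and pitch from the gravity direction, yaw from tilt-compensated magnetometer readings) can be sketched as follows. The function name, axis conventions, and sensor variable names are illustrative assumptions, not the paper's implementation.

```python
import math

def attitude_from_imu(ax, ay, az, mx, my, mz):
    """Roll/pitch from the accelerometer's gravity direction, then a
    tilt-compensated yaw (heading) from the magnetometer.

    A generic sketch of the attitude computation the abstract describes;
    the axis convention (z up, azimuth from x) is an assumption."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate magnetometer readings into the horizontal plane before
    # computing yaw, so heading is unaffected by roll/pitch tilt.
    mx_h = mx * math.cos(pitch) + mz * math.sin(pitch)
    my_h = (mx * math.sin(roll) * math.sin(pitch)
            + my * math.cos(roll)
            - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-my_h, mx_h)
    return roll, pitch, yaw
```

With the device level and the magnetometer's x-axis aligned with magnetic north, all three angles come out as zero, which is a quick sanity check for the chosen convention.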

Experimental Analysis of Physical Signal Jamming Attacks on Automotive LiDAR Sensors and Proposal of Countermeasures (차량용 LiDAR 센서 물리적 신호교란 공격 중심의 실험적 분석과 대응방안 제안)

  • Ji-ung Hwang;Yo-seob Yoon;In-su Oh;Kang-bin Yim
    • Journal of the Korea Institute of Information Security & Cryptology, v.34 no.2, pp.217-228, 2024
  • LiDAR (Light Detection And Ranging) sensors, which play a pivotal role alongside cameras, RADAR (RAdio Detection And Ranging), and ultrasonic sensors in the safe operation of autonomous vehicles, can recognize and detect objects in 360 degrees. However, because LiDAR sensors use lasers to measure distance, they are exposed to attackers and face various security threats. In this paper, we examine several security threats against LiDAR sensors (relay, spoofing, and replay attacks), analyze the feasibility and impact of physical jamming attacks, and assess the risk these attacks pose to the reliability of autonomous driving systems. Through experiments, we show that jamming attacks can cause errors in the ranging ability of LiDAR sensors. Drawing on vehicle-to-vehicle (V2V) communication, multi-sensor fusion under development, and LiDAR anomaly data detection, this work aims to provide a basic direction for countermeasures against these threats to enhance the security of autonomous vehicles, and to verify the practical applicability and effectiveness of the proposed countermeasures in future research.

Object-based Building Change Detection Using Azimuth and Elevation Angles of Sun and Platform in the Multi-sensor Images (태양과 플랫폼의 방위각 및 고도각을 이용한 이종 센서 영상에서의 객체기반 건물 변화탐지)

  • Jung, Sejung;Park, Jueon;Lee, Won Hee;Han, Youkyung
    • Korean Journal of Remote Sensing, v.36 no.5_2, pp.989-1006, 2020
  • Building change monitoring based on building detection is one of the most important tasks in monitoring artificial structures using high-resolution multi-temporal images such as those from CAS500-1 and 2, which are scheduled to be launched. However, the various shapes and sizes of buildings on the Earth's surface, as well as the shadows and trees around them, make accurate building detection difficult. In addition, a large number of misdetections are caused by relief displacement, which depends on the azimuth and elevation angles of the platform. In this study, object-based building detection was performed using the azimuth angle of the Sun and the corresponding main direction of shadows to improve the results of building change detection. The platform's azimuth and elevation angles were then used to detect changed buildings. Object-based segmentation was performed on high-resolution imagery, shadow objects were classified through shadow intensity, and feature information such as rectangular fit, Gray-Level Co-occurrence Matrix (GLCM) homogeneity, and area of each object was calculated to detect building candidates. The final buildings were then detected using the direction and distance relationship between the center of each building-candidate object and its shadow according to the azimuth angle of the Sun. Three methods were proposed for change detection between the building objects detected in each image: simple overlay between objects, comparison of object sizes according to the elevation angle of the platform, and consideration of the direction between objects according to the azimuth angle of the platform. A residential area was selected as the study area, using high-resolution imagery acquired from KOMPSAT-3 and an Unmanned Aerial Vehicle (UAV). Experimental results showed that the F1-scores of building detection using feature information alone were 0.488 and 0.696 in the KOMPSAT-3 and UAV images, respectively, whereas the F1-scores of building detection considering shadows were 0.876 and 0.867, indicating the higher accuracy of the shadow-based method. Among the three proposed change detection methods, the F1-score of the method considering the direction between objects according to the azimuth angle was the highest, at 0.891.
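The building-shadow pairing idea above (a shadow's main direction opposes the solar azimuth) can be illustrated with a minimal sketch. The coordinate convention (azimuth measured clockwise from north, +y pointing north) and the angular tolerance are assumptions for illustration, not the paper's parameters.

```python
import math

def shadow_direction_deg(sun_azimuth_deg):
    """A shadow falls opposite the Sun: its main direction is the solar
    azimuth rotated by 180 degrees (illustrative convention)."""
    return (sun_azimuth_deg + 180.0) % 360.0

def matches_shadow(building_xy, shadow_xy, sun_azimuth_deg, tol_deg=30.0):
    """Check whether a shadow object's centroid lies in the expected
    direction from a building-candidate centroid.

    Names and the tolerance value are assumptions; azimuth is taken
    clockwise from north (+y) in image/map coordinates."""
    dx = shadow_xy[0] - building_xy[0]
    dy = shadow_xy[1] - building_xy[1]
    direction = math.degrees(math.atan2(dx, dy)) % 360.0
    diff = abs(direction - shadow_direction_deg(sun_azimuth_deg))
    # Compare on the circle: 359 deg and 1 deg are 2 deg apart.
    return min(diff, 360.0 - diff) <= tol_deg
```

For a sun azimuth of 135° (southeast), the expected shadow direction is 315° (northwest), so a shadow centroid to the upper-left of the building candidate matches while one to the lower-right does not.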

Classification of Multi-temporal SAR Data by Using Data Transform Based Features and Multiple Classifiers (자료변환 기반 특징과 다중 분류자를 이용한 다중시기 SAR자료의 분류)

  • Yoo, Hee Young;Park, No-Wook;Hong, Sukyoung;Lee, Kyungdo;Kim, Yeseul
    • Korean Journal of Remote Sensing, v.31 no.3, pp.205-214, 2015
  • In this study, a novel land-cover classification framework for multi-temporal SAR data is presented that combines multiple features extracted through data transforms with multiple classifiers. First, data transforms using principal component analysis (PCA) and the 3D wavelet transform are applied to a multi-temporal SAR dataset to extract new features that differ from the original dataset. Then, three different classifiers, including the maximum likelihood classifier (MLC), a neural network (NN), and a support vector machine (SVM), are applied to three different datasets (the data-transform-based features and the original backscattering coefficients), generating diverse preliminary classification results. These results are combined via a majority voting rule to produce a final classification result. In an experiment with a multi-temporal ENVISAT ASAR dataset, each preliminary classification result showed very different classification accuracy depending on the feature and classifier used. The final classification result, combining nine preliminary classification results, showed the best classification accuracy because each preliminary result provided complementary information on land covers. The improvement in classification accuracy was mainly attributed to the diversity gained from combining not only different data-transform-based features but also different classifiers. Therefore, the land-cover classification framework presented in this study can be effectively applied to the classification of multi-temporal SAR data and extended to multi-sensor remote sensing data fusion.
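The majority-voting combination step can be sketched as below; `label_maps` is a hypothetical list of per-pixel label sequences, one per (feature, classifier) pair, and the tie-break rule (keep the earliest-listed classifier's label) is an illustrative assumption rather than the paper's rule.

```python
from collections import Counter

def majority_vote(label_maps):
    """Combine per-pixel class labels from several preliminary
    classification results by majority voting.

    label_maps: list of equal-length label sequences, one per
    (feature, classifier) pair. Ties fall back to the first
    classifier's label (an illustrative tie-break rule)."""
    combined = []
    for labels in zip(*label_maps):
        counts = Counter(labels)
        top = counts.most_common(1)[0][1]
        # Among the most frequent labels, keep the earliest-listed one.
        winner = next(l for l in labels if counts[l] == top)
        combined.append(winner)
    return combined
```

The complementarity the abstract mentions shows up exactly here: a pixel misclassified by one (feature, classifier) pair is outvoted when the other pairs agree on the correct class.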

Image Restoration and Segmentation for PAN-sharpened High Multispectral Imagery (PAN-SHARPENED 고해상도 다중 분광 자료의 영상 복원과 분할)

  • Lee, Sanghoon
    • Korean Journal of Remote Sensing, v.33 no.6_1, pp.1003-1017, 2017
  • Multispectral image data of high spatial resolution is required to obtain correct information on the ground surface, but multispectral data has lower resolution than panchromatic data. The PAN-sharpening fusion technique produces multispectral data at the higher resolution of the panchromatic image. Recently, the object-based approach has been applied to high-spatial-resolution data more often than the conventional pixel-based one. Object-based image analysis requires image segmentation, which produces objects as groups of pixels. Image segmentation can be achieved effectively by iteratively merging two neighboring regions in a Region Adjacency Graph (RAG). In satellite remote sensing, the operational environment of the satellite sensor causes image degradation during acquisition. This degradation increases the variation of pixel values within the same area and deteriorates the accuracy of image segmentation. An iterative approach that reduces the difference in pixel values between two neighboring pixels of the same area is employed to alleviate this variation. The size of the segmented regions is associated with the quality of segmentation and is decided by a stopping rule in the merging process. In this study, image restoration and segmentation were quantitatively evaluated using simulation data and were also applied to three PAN-sharpened multispectral images of high resolution: DubaiSat-2 data of 1 m panchromatic resolution over LA, USA, and KOMPSAT-3 data of 0.7 m panchromatic resolution over Daejeon and Chungcheongnam-do in the Korean peninsula. The experimental results imply that the proposed method can improve analytical accuracy in applications of high-resolution PAN-sharpened multispectral imagery in remote sensing.
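The step-by-step merging of neighboring regions in a RAG can be sketched as follows. This is a simplified sketch: the merge criterion (absolute difference of region means) and the stopping rule (a target region count) are illustrative assumptions, not the paper's actual criterion.

```python
def merge_regions(means, sizes, adjacency, min_regions):
    """Greedy pairwise merging on a region adjacency graph (RAG):
    repeatedly merge the pair of neighboring regions with the smallest
    mean difference until a stopping rule is met.

    means/sizes: dicts keyed by region id; adjacency: dict of
    region id -> set of neighboring region ids."""
    means, sizes = dict(means), dict(sizes)
    adjacency = {r: set(n) for r, n in adjacency.items()}
    while len(means) > min_regions:
        # Find the most similar pair of adjacent regions.
        a, b = min(((a, b) for a in adjacency for b in adjacency[a] if a < b),
                   key=lambda p: abs(means[p[0]] - means[p[1]]))
        # Merge b into a: size-weighted mean, union of neighbors.
        total = sizes[a] + sizes[b]
        means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / total
        sizes[a] = total
        for n in adjacency.pop(b):
            adjacency[n].discard(b)
            if n != a:
                adjacency[n].add(a)
                adjacency[a].add(n)
        del means[b], sizes[b]
    return means
```

On a three-region chain with means 0.0, 0.1, and 5.0, the two similar regions merge first into a size-weighted mean of 0.05 while the dissimilar region survives, which is the behavior the abstract's restoration step aims to protect by reducing within-region pixel variation beforehand.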

A Study on the Use of Drones for Disaster Damage Investigation in Mountainous Terrain (산악지형에서의 재난피해조사를 위한 드론 맵핑 활용방안 연구)

  • Shin, Dongyoon;Kim, Dajinsol;Kim, Seongsam;Han, Youkyung;Nho, Hyunju
    • Korean Journal of Remote Sensing, v.36 no.5_4, pp.1209-1220, 2020
  • In forest areas, the installation of ground control points (GCPs) and the selection of terrain features, which are part of the unmanned aerial photogrammetry workflow, are limited compared to urban areas, and safety problems arise from beyond-visual-line-of-sight flight over tall forest. To compensate for these problems, drones equipped with a real-time kinematic (RTK) sensor that corrects the drone's position in real time, and 3D flight methods that fly based on terrain information, are being developed. This study presents a method for investigating disaster damage using drones in forest areas. Position accuracy was evaluated for three methods: 1) drone mapping through GCP measurement (normal mapping), 2) drone mapping based on topographic data (3D flight mapping), and 3) drone mapping using an RTK drone (RTK mapping); all showed an accuracy within 2 cm in the horizontal and within 13 cm in the vertical position. After the position accuracy evaluation, the volume of the landslide area was calculated for each method and the values were compared; all were similar. This study confirms the feasibility of 3D flight mapping and RTK mapping in forest areas. In the future, more effective damage investigations can be expected if the three methods are used appropriately according to the conditions of the disaster area.

An Exploratory Study of Searching Human Body Segments for Motion Sensors of Smart Sportswear: Focusing on Rowing Motion (동작에 따른 피부변화 분석을 통한 동작센서 부착의 최적위치 탐색: 조정 동작을 중심으로)

  • Han, Bo-Ram;Park, Seonhyung;Cho, Hyun-Seung;Kang, Bokku;Kim, Jin-Sun;Lee, Joohyeon;Kim, Han Sung;Lee, Hae-Dong
    • Science of Emotion and Sensibility, v.20 no.1, pp.17-30, 2017
  • Many interdisciplinary studies fusing high technologies with other areas of research have been attempted in recent years. In sports training, high technologies such as vital-sign sensors and accelerometers have been adopted as training tools to improve players' performance. The purpose of this study is to find the proper locations on the human body for attaching motion sensors in order to develop a smart sportswear that could help in training players. Rowing was selected as the subject sport because rowing motions involve many joint movements across the body. Rowers are divided into two weight divisions, lightweight and heavyweight. In this study, the change rates of the distances between markers on the skin during movement were captured on the back, elbow, hip, and knee areas using a 3D motion capture system. The distances between markers and the differences between the lightweight and heavyweight divisions were analyzed. Finally, this study provides guidelines for designing a motion-sensing smart sportswear.

Orthophoto and DEM Generation Using Low Specification UAV Images from Different Altitudes (고도가 다른 저사양 UAV 영상을 이용한 정사영상 및 DEM 제작)

  • Lee, Ki Rim;Lee, Won Hee
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.34 no.5, pp.535-544, 2016
  • Although existing methods of orthophoto production using expensive aircraft are effective over large areas, they are difficult to update quickly as geographic features change. However, as UAV (Unmanned Aerial Vehicle) technology has advanced rapidly, and with onboard sensors such as GPS and IMU, UAV and sensor technology is considered a viable substitute for expensive traditional aerial photogrammetry. Orthophoto production using UAVs has the advantage that spatial information for a small area can be updated quickly. However, existing studies generate orthophotos from images taken at a single altitude, which leads to data redundancy and difficulties in data renewal. In this study, we targeted a small sloped area and, using a low-end UAV, generated an orthophoto and DEM (Digital Elevation Model) from images taken at different altitudes. The RMSE of the check points was σh = 0.023 m on the horizontal plane and σv = 0.049 m on the vertical plane. These maximum and mean RMSE values comply with the working rule agreement for aerial photogrammetry of the National Geographic Information Institute (NGII) for a 1/500-scale digital map. This paper suggests that an orthophoto of high accuracy can be generated from images taken at different altitudes, reducing data redundancy and providing spatial information quickly.

Bundle Block Adjustment of Omni-directional Images by a Mobile Mapping System (모바일매핑시스템으로 취득된 전방위 영상의 광속조정법)

  • Oh, Tae-Wan;Lee, Im-Pyeong
    • Korean Journal of Remote Sensing, v.26 no.5, pp.593-603, 2010
  • Most spatial data acquisition systems employing a set of frame cameras suffer from small fields of view and a poor base-to-distance ratio. These limitations can be significantly reduced by employing an omni-directional camera, which is capable of acquiring images in every direction. Bundle Block Adjustment (BBA) is an established georeferencing method for determining the exterior orientation parameters of two or more images. In this study, by extending the concept of the traditional BBA method, we develop a mathematical model of BBA for omni-directional images. The proposed model includes three main parts: observation equations based on collinearity equations newly derived for omni-directional images, stochastic constraints imposed from GPS/INS data, and constraints from GCPs. We also report experimental results from applying the proposed BBA to real data obtained mainly in urban areas. With different combinations of the constraints, we applied four types of mathematical models. With the type where only GCPs are used as constraints, the proposed BBA provides the most accurate results, ±5 cm RMSE in the estimated ground point coordinates. In the future, we plan to perform more sophisticated lens calibration for the omni-directional camera to improve the georeferencing accuracy of omni-directional images. These georeferenced omni-directional images can be effectively utilized for city modelling, particularly autonomous texture mapping for realistic street views.

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee;Duk-jin Kim;Junwoo Kim;Juyoung Song
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1245-1254, 2023
  • Significant research has been conducted on the W-band synthetic aperture radar (SAR) system that utilizes 77 GHz frequency-modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, it is necessary to transform the point cloud acquired from stereo cameras or LiDAR along 6 degrees of freedom (DOF) and apply it in the SAR signal processing. However, matching images is difficult because of the different geometric structures of images acquired from different sensors. In this study, we present a method to extract an optimized depth map by obtaining the 6 DOF of the point cloud using a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. The SAR image reconstructed using the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index compared to SAR images reconstructed from radar coordinates.
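The image-entropy objective that the gradient-descent search over the 6 DOF minimizes can be illustrated with a minimal sketch: lower entropy corresponds to a sharper, better-focused reconstruction. The normalization scheme and the flat-list input are simplifying assumptions, not the paper's exact objective.

```python
import math

def image_entropy(intensities):
    """Shannon entropy of a normalized SAR intensity image.

    intensities: flat iterable of non-negative pixel intensities.
    A sharply focused image concentrates energy in few pixels and
    yields low entropy; a blurred image spreads energy and yields
    high entropy, so minimizing this value sharpens the image."""
    total = sum(intensities)
    entropy = 0.0
    for v in intensities:
        if v > 0:
            p = v / total
            entropy -= p * math.log(p)
    return entropy
```

A perfectly concentrated image (all energy in one pixel) scores zero, while a uniform image scores the maximum, so a gradient-descent step that reduces this value moves the 6-DOF transform toward a better-focused reconstruction.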