• Title/Summary/Keyword: Multi-Sensor Image

Matching and Geometric Correction of Multi-Resolution Satellite SAR Images Using SURF Technique (SURF 기법을 활용한 위성 SAR 다중해상도 영상의 정합 및 기하보정)

  • Kim, Ah-Leum;Song, Jung-Hwan;Kang, Seo-Li;Lee, Woo-Kyung
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.4
    • /
    • pp.431-444
    • /
    • 2014
  • As applications of spaceborne SAR imagery expand, there are increasing demands for accurate registration for better understanding and fusion of radar images. It has become common to adopt multi-resolution SAR images for wide-area reconnaissance. Geometric correction of SAR images can be performed using satellite orbit and attitude information. However, inherent errors in the SAR sensor's attitude and in the ground geographical data tend to cause geometric errors in the produced SAR image. These errors should be corrected when the SAR images are used for multi-temporal analysis, change detection, and image fusion with other sensor images. The undesirable ground registration errors can be corrected with respect to true ground control points in order to produce complete SAR products. The Speeded Up Robust Features (SURF) technique is an efficient algorithm for extracting ground control points from images, but it is often considered inappropriate for SAR images because of strong speckle noise. In this paper, an attempt is made to apply the SURF algorithm to SAR images for image registration and fusion. Matched points are extracted while varying the Hessian and SURF matching thresholds, and the performance is analyzed by measuring the image matching accuracies. A number of performance measures concerning image registration are suggested to validate the use of SURF for spaceborne SAR images. Various simulation methodologies are suggested to validate the use of SURF for geometric correction and image registration, and it is shown that a good choice of input parameters to the SURF algorithm is required when it is applied to spaceborne SAR images of moderate resolution.
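
For illustration only (not part of the paper), a minimal sketch of SURF keypoint extraction and ratio-test matching of the kind described above, using OpenCV's contrib module; the file names, Hessian threshold, and ratio value are assumptions.

```python
import cv2

# Assumed input tiles; real SAR products would need radiometric preprocessing
ref = cv2.imread("sar_reference.tif", cv2.IMREAD_GRAYSCALE)
tgt = cv2.imread("sar_target.tif", cv2.IMREAD_GRAYSCALE)

# The Hessian threshold controls how many keypoints survive speckle noise
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=4000)
kp1, des1 = surf.detectAndCompute(ref, None)
kp2, des2 = surf.detectAndCompute(tgt, None)

# Ratio-test matching; the 0.7 ratio plays the role of the SURF matching threshold
matcher = cv2.BFMatcher(cv2.NORM_L2)
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.7 * n.distance]
print(f"{len(good)} candidate ground-control-point pairs")
```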

Multi-Object Tracking using the Color-Based Particle Filter in ISpace with Distributed Sensor Network

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.46-51
    • /
    • 2005
  • Intelligent Space (ISpace) is a space in which many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate and offer useful services, it is very important that the system knows the location of objects in the environment. To achieve this goal, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. The article also presents the integration of color distributions into particle filtering. Particle filters provide a robust tracking framework under ambiguous conditions. We propose to track moving objects by generating hypotheses not in the image plane but on the top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. Simulations are carried out to evaluate the proposed performance. The method is also applied to the intelligent environment, and its performance is verified by experiments.
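
For illustration only (not the authors' implementation), a minimal sketch of one color-based particle filter step with a hue-histogram likelihood and resampling; the histogram bin count, noise level, and likelihood scaling are assumptions.

```python
import numpy as np
import cv2

def hue_hist(patch, bins=16):
    # Normalized hue histogram used as the color model of the target
    hsv = cv2.cvtColor(patch, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(h, None).flatten()

def pf_step(particles, frame, ref_hist, box=20, noise=8.0):
    # Predict: random-walk motion model on (x, y) hypotheses
    particles = particles + np.random.normal(0, noise, particles.shape)
    weights = np.full(len(particles), 1e-12)
    for i, (x, y) in enumerate(particles.astype(int)):
        patch = frame[max(y - box, 0):y + box, max(x - box, 0):x + box]
        if patch.size == 0:
            continue
        # Likelihood from the Bhattacharyya distance between color histograms
        d = cv2.compareHist(ref_hist, hue_hist(patch), cv2.HISTCMP_BHATTACHARYYA)
        weights[i] += np.exp(-20.0 * d ** 2)
    weights /= weights.sum()
    # Resample in proportion to the weights; the mean is the state estimate
    idx = np.random.choice(len(particles), len(particles), p=weights)
    return particles[idx], particles[idx].mean(axis=0)
```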

Scaling Attack Method for Misalignment Error of Camera-LiDAR Calibration Model (카메라-라이다 융합 모델의 오류 유발을 위한 스케일링 공격 방법)

  • Yi-ji Im;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.6
    • /
    • pp.1099-1110
    • /
    • 2023
  • The recognition systems used in autonomous driving and robot navigation perform vision tasks such as object recognition, tracking, and lane detection after multi-sensor fusion to improve performance. Research on deep learning models based on the fusion of a camera and a LiDAR sensor is being actively conducted. However, deep learning models are vulnerable to adversarial attacks through modulation of the input data. Attacks on existing multi-sensor-based autonomous driving recognition systems focus on hindering obstacle detection by lowering the confidence score of the object recognition model; however, such attacks work only against the targeted model. For attacks on the sensor fusion stage, errors can cascade into the vision tasks performed after fusion, and this risk needs to be considered. In addition, an attack on LiDAR point cloud data, which is difficult to judge visually, makes it hard to determine whether an attack has occurred. In this study, we propose an image scaling-based attack method that reduces the accuracy of LCCNet, a camera-LiDAR calibration (fusion) model. The proposed method performs a scaling attack on the input LiDAR points. In attack performance experiments with a scaling algorithm over different scaling sizes, fusion errors of more than 77% were induced on average.
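
As a rough illustration of the attack surface described above (not the paper's method or parameters), a scaling perturbation applied to the LiDAR points fed into a camera-LiDAR calibration model would look roughly like this; the scaling factor is an assumption.

```python
import numpy as np

def scale_attack(points_xyz: np.ndarray, factor: float = 1.2) -> np.ndarray:
    """Scale LiDAR points about the sensor origin; a factor != 1 shifts the
    projected point positions and so degrades the learned camera-LiDAR alignment."""
    return points_xyz * factor

# A point 10 m ahead of the sensor is displaced by 2 m after a 1.2x scaling
pts = np.array([[10.0, 0.0, 0.0]])
print(scale_attack(pts, 1.2))  # [[12.  0.  0.]]
```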

A Study on Point Cloud Generation Method from UAV Image Using Incremental Bundle Adjustment and Stereo Image Matching Technique (Incremental Bundle Adjustment와 스테레오 영상 정합 기법을 적용한 무인항공기 영상에서의 포인트 클라우드 생성방안 연구)

  • Rhee, Sooahm;Hwang, Yunhyuk;Kim, Soohyeon
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_1
    • /
    • pp.941-951
    • /
    • 2018
  • The utilization of and demand for UAVs (unmanned aerial vehicles) for the generation of 3D city models are increasing. In this study, we performed experiments to adjust the position/orientation of a UAV with incomplete attitude information and to extract point cloud data. In order to correct the attitude of the UAV, rotation angles were calculated using the continuous position information of the UAV's movement. Based on this, corrected position/orientation information was obtained by applying photogrammetry-based IBA (Incremental Bundle Adjustment). Each image pair was transformed into an epipolar image, and the MDR (Multi-Dimensional Relaxation) technique was applied to obtain a high-precision DSM. The extracted pairs are aggregated and output in the form of a single point cloud or DSM. Using DJI Inspire 1 and Phantom 4 images, we confirmed that a point cloud can be extracted that clearly expresses the railing of a building. In the future, research will be conducted on improving the matching performance and on establishing sensor models for oblique images, and the image processing technology required to generate 3D city models from the extracted 3D point clouds will be developed further.
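
A hedged sketch of the generic epipolar-rectification and dense-matching stage described above; OpenCV's SGBM matcher stands in for the authors' MDR technique, which is not publicly available, and the camera matrices, distortion vectors, and relative orientation (K1, d1, K2, d2, R, T) are assumed to come from the IBA-adjusted exterior orientations.

```python
import cv2
import numpy as np

def epipolar_point_cloud(img_l, img_r, K1, d1, K2, d2, R, T, size):
    # Rectify the pair so that conjugate points lie on the same image row
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    map_l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    map_r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rect_l = cv2.remap(img_l, *map_l, cv2.INTER_LINEAR)
    rect_r = cv2.remap(img_r, *map_r, cv2.INTER_LINEAR)
    # Dense matching on the epipolar images (SGBM as a stand-in for MDR)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0
    # Reproject the disparity map to 3D: one stereo pair yields one partial cloud
    return cv2.reprojectImageTo3D(disparity, Q)
```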

Multi Point Cloud Integration based on Observation Vectors between Stereo Images (스테레오 영상 간 관측 벡터에 기반한 다중 포인트 클라우드 통합)

  • Yoon, Wansang;Kim, Han-gyeol;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.5_1
    • /
    • pp.727-736
    • /
    • 2019
  • In this paper, we present how to create a point cloud of a target area using multiple unmanned aerial vehicle images and how to remove the gaps and overlapping points between datasets. For this purpose, the IBA (Incremental Bundle Adjustment) technique was first applied to correct the position and attitude of the UAV platform. We then generate a point cloud using the MDR (Multi-Dimensional Relaxation) matching technique. Next, we register the point clouds based on observation vectors between stereo images; by doing so, we remove gaps between point clouds generated from different stereo pairs. Finally, we applied an occupancy-grid-based integration algorithm to remove duplicated points and create an integrated point cloud. The experiments were performed using UAV images and show that it is possible to remove gaps and duplicate points between point clouds generated from different stereo pairs.
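
A minimal sketch of occupancy-grid style integration, assuming the per-pair clouds are already registered into a common frame: the grid (voxel) size is an assumption, and only one representative point is kept per occupied cell.

```python
import numpy as np

def integrate_clouds(clouds, voxel=0.1):
    # Stack the registered (N, 3) clouds from all stereo pairs
    pts = np.vstack(clouds)
    # Assign each point to a voxel and keep the first point per occupied voxel
    keys = np.floor(pts / voxel).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(first)]

# Example with two synthetic overlapping clouds
merged = integrate_clouds([np.random.rand(1000, 3), np.random.rand(1000, 3)])
print(merged.shape)
```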

Intelligent Lighting Control using Wireless Sensor Networks for Media Production

  • Park, Hee-Min;Burke, Jeff;Srivastava, Mani B.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.3 no.5
    • /
    • pp.423-443
    • /
    • 2009
  • We present the design and implementation of a unique sensing and actuation application -- the Illuminator: a sensor network-based intelligent light control system for entertainment and media production. Unlike most sensor network applications, which focus on sensing alone, a distinctive aspect of the Illuminator is that it closes the loop from light sensing to lighting control. We describe the Illuminator's design requirements, system architecture, algorithms, implementation and experimental results. The system uses the Illumimote, a multi-modal and high-fidelity light sensor module well-suited for wireless sensor networks, to satisfy the high-performance light sensing requirements of entertainment and media production applications. The Illuminator system is a toolset to characterize the illumination profile of a deployed set of fixed-position lights, to generate desired lighting effects for moving targets (actors, scenic elements, etc.) based on user constraints expressed in a formal language, and to assist in the setup of lights to achieve the same illumination profile in multiple venues. After characterizing deployed lights, the Illuminator computes optimal light settings at run-time to achieve a user-specified actuation profile, using an optimization framework based on a genetic algorithm. Uniquely, it can use deployed sensors to incorporate changing ambient lighting conditions and moving targets into actuation. Experimental results demonstrate that the Illuminator handles various high-level user requirements and generates an optimal light actuation profile. These results suggest that the Illuminator system supports entertainment and media production applications.
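
For illustration only, a toy genetic-algorithm loop of the kind the abstract describes: it searches dimmer settings for a fixed set of lights so that a characterized light-to-sensor response (`illum_model`, a stand-in for the Illuminator's calibrated profile) approaches a target profile; the population size, mutation scale, and crossover rule are assumptions.

```python
import numpy as np

def ga_light_settings(illum_model, target, n_lights, pop=40, gens=100):
    population = np.random.rand(pop, n_lights)  # dimmer levels in [0, 1]
    for _ in range(gens):
        # Fitness: distance between predicted illumination and the target profile
        errors = np.array([np.linalg.norm(illum_model(p) - target) for p in population])
        parents = population[np.argsort(errors)[: pop // 2]]       # selection
        mates = parents[np.random.permutation(len(parents))]
        children = (parents + mates) / 2                            # crossover
        children += np.random.normal(0, 0.05, children.shape)       # mutation
        population = np.clip(np.vstack([parents, children]), 0, 1)
    errors = np.array([np.linalg.norm(illum_model(p) - target) for p in population])
    return population[np.argmin(errors)]
```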

Design of Pattern Array Method for Multi Data Augmentation of Power Equipment using Single Image Pattern (단일 이미지 패턴을 이용한 다수의 전력설비 데이터를 증강하기 위한 패턴 배열화 기법 설계)

  • Kim, Seoksoo
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.11
    • /
    • pp.1-8
    • /
    • 2020
  • As power consumption grows and individual power brokerages and power production facilities increase, research on augmented reality-based monitoring systems that help on-site facility managers maintain and repair power facilities is being actively conducted. However, existing augmented reality-based monitoring systems have difficulty detecting patterns accurately because of problems such as the external environment, facility complexity, and interference from the lighting environment, and they cannot match the various sensing and service information of a power facility to a single pattern. For this reason, since sensor information is matched using a separate image pattern for each sensor of a power facility, multiple image patterns are required to augment and provide all of the information. In this paper, we propose a single-image pattern arrangement method that matches and provides multiple pieces of information through array combinations of the feature patterns composing a single image.
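
A hypothetical illustration of the pattern-array idea (the names and data bindings are invented): a single marker image is read as an ordered array of feature-pattern IDs, and each array combination keys a different piece of sensing or service information.

```python
# Each ordered combination of feature patterns in one image maps to its own data
pattern_data = {
    ("P1", "P3", "P2"): {"sensor": "transformer-temperature", "unit": "degC"},
    ("P2", "P1", "P3"): {"sensor": "line-current", "unit": "A"},
}

def lookup(detected_patterns):
    # Resolve the detected pattern array to its augmented-reality payload
    return pattern_data.get(tuple(detected_patterns))

print(lookup(["P1", "P3", "P2"]))  # one image, multiple possible data bindings
```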

AUTOMATIC ROAD NETWORK EXTRACTION USING LIDAR RANGE AND INTENSITY DATA

  • Kim, Moon-Gie;Cho, Woo-Sug
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.79-82
    • /
    • 2005
  • The need for road data continues to grow in industrial society, and roads are being repaired and newly constructed in many areas. As national, city, and regional development proceeds, updating and acquiring road data for GIS (Geographical Information System) is essential. In this study, a fusion of the range data (3D ground coordinate data) and the intensity data contained in stand-alone LiDAR data is used for road extraction, after which digital image processing methods are applied. LiDAR intensity data has only recently begun to be studied; this study shows a possible method for road extraction using it. Because intensity and range data are acquired at the same time, LiDAR avoids the problems of multi-sensor data fusion. A further advantage of the intensity data is that it is already geocoded, is at the same scale as the real world, and can be used to make ortho-photos. Lastly, a quantitative and qualitative analysis is presented by comparing the extracted road image with a 1:1,000 digital map.
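
A hedged sketch of one simple way to screen road candidates from a geocoded LiDAR intensity raster (asphalt returns tend toward low intensity), followed by a morphological clean-up; the file name and thresholds are assumptions, not values from the paper.

```python
import cv2

# Assumed 8-bit geocoded intensity raster
intensity = cv2.imread("lidar_intensity.tif", cv2.IMREAD_GRAYSCALE)

# Keep the low-to-moderate intensity range typical of asphalt surfaces
road_mask = cv2.inRange(intensity, 10, 60)

# Morphological opening removes isolated non-road responses
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
road_mask = cv2.morphologyEx(road_mask, cv2.MORPH_OPEN, kernel)
cv2.imwrite("road_candidates.tif", road_mask)
```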

Super-resolution image enhancement by Papoulis-Gerchberg method improvement (Papoulis-Gerchberg 방법의 개선에 의한 초해상도 영상 화질 향상)

  • Jang, Hyo-Sik;Kim, Duk-Gyoo;Jung, Yoon-Soo;Lee, Tae-Gyoun;Won, Chul-Ho
    • Journal of Sensor Science and Technology
    • /
    • v.19 no.2
    • /
    • pp.118-123
    • /
    • 2010
  • This paper proposes a super-resolution reconstruction algorithm for image enhancement. Super-resolution reconstruction algorithms reconstruct a high-resolution image from multi-frame low-resolution images of a scene. Conventional super-resolution reconstruction algorithms include iterative back-projection (IBP), robust super-resolution (RS), and the standard Papoulis-Gerchberg (PG) method. However, these traditional methods suffer from problems such as rotation artifacts and ringing, so this paper proposes a modified algorithm to address them. Experimental results show that the proposed algorithm resolves these problems and yields a higher PSNR than the traditional super-resolution reconstruction algorithms.
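
A minimal sketch of the standard Papoulis-Gerchberg iteration that the abstract takes as its baseline: alternate between band-limiting the estimate in the frequency domain and re-imposing the known (registered) low-resolution samples in the spatial domain; the masks and iteration count are assumptions.

```python
import numpy as np

def papoulis_gerchberg(hr_init, known_mask, known_values, band_mask, iters=50):
    """hr_init: initial high-resolution grid; known_mask/known_values: locations and
    values of the registered low-resolution samples; band_mask: 1 inside the assumed
    band limit of the (fft-shifted) Fourier plane."""
    x = hr_init.astype(np.float64).copy()
    for _ in range(iters):
        X = np.fft.fftshift(np.fft.fft2(x))
        X *= band_mask                                   # enforce the band limitation
        x = np.real(np.fft.ifft2(np.fft.ifftshift(X)))
        x[known_mask] = known_values                     # re-impose observed samples
    return x
```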

Analysis of Non-Point Pollution Sources in the Taewha River Area Using the Hyper-Sensor Information (하이퍼센서 정보를 이용한 태화강지역의 비점오염원 분석)

  • KIM, Yong-Suk
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.20 no.1
    • /
    • pp.56-70
    • /
    • 2017
  • In this study, multi-image information for the central Taewha River basin was used to develop and analyze a distribution map of non-point pollution sources. The data were collected using a hyper-sensor (image), aerial photography, and a field spectro-radiometer. An image correction process was performed for each image to produce an ortho-image. In addition, the spectra from the field spectro-radiometer measurements were analyzed for each class to create land cover maps and distribution maps of non-point pollutant sources. In the western region of the Taewha River basin, where most of the forest and agricultural land is distributed, the distribution map showed generated loads of 1.0-2.3 kg/km²·day for BOD, 0.06-9.44 kg/km²·day for TN, and 0.03-0.24 kg/km²·day for TP, which are low load distributions. In the eastern region, where urbanization is in progress, the BOD, TN, and TP loads were 85.9, 13.69, and 2.76 kg/km²·day, respectively, showing relatively high load distributions when land use was classified by plot.
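
As a worked illustration of what a generated load per unit area means in practice (the figures are illustrative, not results from the study), a daily load for a sub-basin is simply the unit-area load multiplied by the area:

```python
bod_unit_load = 2.3   # kg/(km^2 * day), upper end reported for the western region
area_km2 = 15.0       # hypothetical sub-basin area
daily_bod_load = bod_unit_load * area_km2
print(f"{daily_bod_load:.1f} kg/day")  # 34.5 kg/day
```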