• Title/Summary/Keyword: Sensor registration

Anonymity for Low-Power Sensor Node in Ubiquitous Network (유비쿼터스 네트워크에서 저 전력 센서노드의 익명성)

  • Kim, Dong-Myung;Woo, Sung-Hee;Lee, Sang-Ho
    • Journal of the Korea Society of Computer and Information / v.11 no.1 s.39 / pp.177-184 / 2006
  • Sensor nodes in a ubiquitous network are constrained by low power and ultra-light weight, so many studies have focused on these limitations. This study improves the registration and authorization process and suggests a way to minimize the disclosure of private information by using aliases. We introduce an RA (Relay Agent) to compensate for the restricted capabilities of sensor nodes, and improve the anonymity of each sensor node's private information by having the SM (Service Manager) assign an alias during the registration and authentication procedure. The privacy of a sensor node is thereby protected during registration, authentication, and communication between nodes. The level of security is improved with only a partial increase in the computational load of the RA and SM, and without placing any additional burden on the sensor nodes.

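The abstract above describes the alias-based registration flow only at a high level. As a loose illustration of the idea, not the paper's protocol, the sketch below has a hypothetical Relay Agent perform registration on behalf of a constrained node while the Service Manager issues a random alias that replaces the node's real identifier in later traffic; all class and method names are invented for this example.

```python
# Minimal sketch of alias assignment during sensor registration.
# All class and method names are hypothetical; the paper's actual protocol
# messages and cryptographic details are not reproduced here.
import secrets


class ServiceManager:
    """Issues aliases so a node's real ID never appears on the air again."""

    def __init__(self):
        self._alias_to_id = {}          # alias -> real node ID (kept only at the SM)

    def register(self, real_id: str) -> str:
        alias = secrets.token_hex(4)    # short random alias for a low-power node
        self._alias_to_id[alias] = real_id
        return alias

    def resolve(self, alias: str) -> str:
        return self._alias_to_id[alias]


class RelayAgent:
    """Performs the registration exchange on behalf of constrained nodes."""

    def __init__(self, sm: ServiceManager):
        self.sm = sm

    def register_node(self, real_id: str) -> str:
        # The RA, not the sensor node, talks to the SM and hands back the alias.
        return self.sm.register(real_id)


if __name__ == "__main__":
    sm = ServiceManager()
    ra = RelayAgent(sm)
    alias = ra.register_node("sensor-node-0017")
    print("node communicates as:", alias)            # real ID is hidden from peers
    print("SM can still resolve:", sm.resolve(alias))
```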

Real-time Localization of An UGV based on Uniform Arc Length Sampling of A 360 Degree Range Sensor (전방향 거리 센서의 균일 원호길이 샘플링을 이용한 무인 이동차량의 실시간 위치 추정)

  • Park, Soon-Yong;Choi, Sung-In
    • Journal of the Institute of Electronics Engineers of Korea CI / v.48 no.6 / pp.114-122 / 2011
  • We propose an automatic localization technique based on Uniform Arc Length Sampling (UALS) of 360-degree range sensor data. The proposed method samples 3D points from a dense point cloud acquired by the sensor, registers the sampled points to a digital surface model (DSM) in real time, and determines the location of an Unmanned Ground Vehicle (UGV). To reduce the sampling and registration time for a sequence of dense range data, the 3D range points are sampled uniformly in terms of ground sample distance. Using the proposed method, we can reduce the number of 3D points while maintaining their uniformity over the range data. We compare the registration speed and accuracy of the proposed method with those of a conventional sampling method. Through several experiments in which the number of sampling points is varied, we analyze the speed and accuracy of the proposed method.
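
Restated, the sampling step keeps a range return only after the accumulated arc length along the scan has grown by one ground sample distance. The sketch below applies that idea to a single synthetic 360-degree scan; the scan format and the 0.5 m ground sample distance are assumptions for illustration, not the paper's implementation.

```python
# Simplified sketch of uniform arc-length sampling of one 360-degree scan.
# The scan format (angle, range) and the ground-sample-distance value are
# assumptions made for illustration only.
import math

import numpy as np


def uniform_arc_length_sample(angles, ranges, gsd=0.5):
    """Keep a return only after the path length along the scan, projected
    onto the ground plane, has grown by at least `gsd` metres."""
    xs = ranges * np.cos(angles)
    ys = ranges * np.sin(angles)
    kept = [0]                      # always keep the first return
    accumulated = 0.0
    for i in range(1, len(xs)):
        step = math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1])
        accumulated += step
        if accumulated >= gsd:
            kept.append(i)
            accumulated = 0.0
    return np.asarray(kept)


if __name__ == "__main__":
    # Synthetic scan: 3600 returns over 360 degrees, varying range.
    angles = np.linspace(0.0, 2.0 * math.pi, 3600, endpoint=False)
    ranges = 10.0 + 2.0 * np.sin(5.0 * angles)
    idx = uniform_arc_length_sample(angles, ranges, gsd=0.5)
    print(f"kept {len(idx)} of {len(angles)} returns")
```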

Co-registration Between PAN and MS Bands Using Sensor Modeling and Image Matching (센서모델링과 영상매칭을 통한 PAN과 MS 밴드간 상호좌표등록)

  • Lee, Chang No;Oh, Jae Hong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.1 / pp.13-21 / 2021
  • High-resolution satellites such as Kompsat-3 and CAS-500 carry optical cameras whose MS (Multispectral) and PAN (Panchromatic) CCD (Charge Coupled Device) sensors are installed with certain offsets. The offsets between the CCD sensors produce a geometric discrepancy between the MS and PAN images because a ground target is imaged at slightly different times by the two sensors. For a precise pan-sharpening process, we propose a co-registration procedure consisting of physical sensor modeling and image matching. The physical sensor model provides the initial co-registration, and image matching is then carried out for further refinement. An experiment with Kompsat-3 images reduced the geometric discrepancy between the MS and PAN images to an RMSE (Root Mean Square Error) of about 0.2 pixels.
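
The two-stage structure described above, a sensor-model prediction followed by image-matching refinement, can be illustrated without the actual Kompsat-3 geometry. In the sketch below the sensor model is reduced to a fixed pixel offset and the refinement to a plain FFT phase correlation; both simplifications are assumptions of this example, not the authors' method.

```python
# Illustrative two-stage co-registration: apply the coarse offset predicted by
# a (here, trivial) "sensor model", then refine the residual shift by phase
# correlation. Real PAN/MS imagery, RPC-based sensor modelling, and sub-pixel
# refinement are all omitted from this sketch.
import numpy as np


def phase_correlation_shift(ref, mov):
    """Estimate the integer (row, col) shift d such that mov ~= np.roll(ref, d)."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(mov)
    cross = F_mov * np.conj(F_ref)
    cross /= np.abs(cross) + 1e-12          # normalised cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the half-size correspond to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p - s)
                 for p, s in zip(peak, corr.shape))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pan = rng.random((256, 256))                      # stand-in for the PAN band
    true_offset = (7, -3)                             # pretend MS is offset by this
    ms = np.roll(pan, true_offset, axis=(0, 1))

    model_offset = (5, -2)                            # stage 1: sensor-model prediction
    ms_coarse = np.roll(ms, (-model_offset[0], -model_offset[1]), axis=(0, 1))

    residual = phase_correlation_shift(pan, ms_coarse)   # stage 2: image matching
    total = (model_offset[0] + residual[0], model_offset[1] + residual[1])
    print("residual after sensor model:", residual)       # (2, -1)
    print("total estimated offset:", total)                # (7, -3)
```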

Fine-image Registration between Multi-sensor Satellite Images for Global Fusion Application of KOMPSAT-3·3A Imagery (KOMPSAT-3·3A 위성영상 글로벌 융합활용을 위한 다중센서 위성영상과의 정밀영상정합)

  • Kim, Taeheon;Yun, Yerin;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing / v.38 no.6_4 / pp.1901-1910 / 2022
  • With the arrival of the new space age, securing technology for the fused use of KOMPSAT-3·3A and global satellite images is becoming more important. In general, multi-sensor satellite images have relative geometric errors caused by various external factors at the time of acquisition, degrading the quality of satellite image products. We therefore propose a fine-image registration methodology to minimize the relative geometric error between KOMPSAT-3·3A and global satellite images. After selecting the overlapping area between the KOMPSAT-3·3A image and the foreign satellite image, the spatial resolutions of the two images are unified. Subsequently, tie-points are extracted using a hybrid matching method in which feature-based and area-based matching are combined. Fine-image registration is then performed through iterative registration based on pyramid images. To evaluate the performance and accuracy of the proposed method, we used KOMPSAT-3·3A, Sentinel-2A, and PlanetScope satellite images acquired over Daejeon city, South Korea. The average RMSE of the proposed method was 1.2 pixels for the Sentinel-2A images and 3.59 pixels for the PlanetScope images. Consequently, fine-image registration between multi-sensor satellite images can be performed effectively using the proposed method.
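
The pyramid-based iteration mentioned above is a standard coarse-to-fine strategy: estimate a shift on a strongly downsampled image pair, scale it up, and refine it at the next finer level. The sketch below uses block-mean downsampling and a small brute-force SSD search per level; the pyramid depth and the per-level matcher are placeholders rather than the hybrid feature- and area-based matching the authors describe.

```python
# Coarse-to-fine (pyramid) translation estimation between two single-band
# images. Block-mean downsampling, a +/-3 pixel search per level, and the
# number of levels are illustrative choices only.
import numpy as np


def downsample2(img):
    """Halve the resolution with a 2x2 block mean."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))


def shift_ssd(ref, mov, dy, dx):
    """Mean squared difference on the overlap after shifting `mov` by (dy, dx)."""
    h, w = ref.shape
    ys = slice(max(0, dy), min(h, h + dy))
    xs = slice(max(0, dx), min(w, w + dx))
    ys_m = slice(max(0, -dy), min(h, h - dy))
    xs_m = slice(max(0, -dx), min(w, w - dx))
    diff = ref[ys, xs] - mov[ys_m, xs_m]
    return float(np.mean(diff ** 2))


def pyramid_register(ref, mov, levels=4, radius=3):
    """Return (dy, dx) such that shifting `mov` by it best matches `ref`."""
    pyr = [(ref, mov)]
    for _ in range(levels - 1):
        r, m = pyr[-1]
        pyr.append((downsample2(r), downsample2(m)))

    dy, dx = 0, 0
    for r, m in reversed(pyr):              # coarsest level first
        dy, dx = dy * 2, dx * 2             # scale the estimate up one level
        best = None
        for ddy in range(-radius, radius + 1):
            for ddx in range(-radius, radius + 1):
                cost = shift_ssd(r, m, dy + ddy, dx + ddx)
                if best is None or cost < best[0]:
                    best = (cost, dy + ddy, dx + ddx)
        _, dy, dx = best
    return dy, dx


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.random((256, 256))
    moved = np.roll(base, (-11, 6), axis=(0, 1))    # simulate a relative offset
    print(pyramid_register(base, moved))            # should recover approximately (11, -6)
```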

Registration Method between High Resolution Optical and SAR Images (고해상도 광학영상과 SAR 영상 간 정합 기법)

  • Jeon, Hyeongju;Kim, Yongil
    • Korean Journal of Remote Sensing / v.34 no.5 / pp.739-747 / 2018
  • Integration analysis of multi-sensor satellite images is becoming increasingly important, and its first step is image registration between the multi-sensor images. SIFT (Scale Invariant Feature Transform) is a representative image registration method. However, optical and SAR (Synthetic Aperture Radar) images differ in sensor attitude and radiometric characteristics at the time of acquisition, and the radiometric relationship between the images is nonlinear, which makes it difficult to apply conventional methods such as SIFT. To overcome this limitation, we propose a modified method that combines SAR-SIFT with the shape descriptor DLSS (Dense Local Self-Similarity). We conducted an experiment using two pairs of Cosmo-SkyMed and KOMPSAT-2 images collected over Daejeon, Korea, an area with a high density of buildings. Compared with conventional methods such as SIFT and SAR-SIFT, the proposed method extracted correct matching points, and it also gave quantitatively reasonable results, with RMSEs of 1.66 m and 2.45 m over the two image pairs.
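
Self-similarity descriptors such as DLSS characterize a pixel by how similar its immediate neighborhood is to the surrounding patches, which is why they tolerate the nonlinear radiometry between optical and SAR data. The sketch below computes a heavily simplified local self-similarity surface for one pixel; the patch size, window size, and bandwidth are arbitrary choices, and the log-polar binning of the full descriptor is omitted.

```python
# Heavily simplified local self-similarity surface for one pixel, in the
# spirit of the DLSS descriptor. Patch size, window size, and the bandwidth
# `sigma` are illustrative values; no bounds checking is done, so call it
# well inside the image.
import numpy as np


def self_similarity_surface(img, y, x, patch=2, window=10, sigma=0.1):
    """Compare the (2*patch+1)^2 patch at (y, x) with every same-sized patch
    whose centre lies in a (2*window+1)^2 neighbourhood; return a similarity
    surface with values in (0, 1]."""
    centre = img[y - patch:y + patch + 1, x - patch:x + patch + 1]
    size = 2 * window + 1
    surface = np.zeros((size, size))
    for dy in range(-window, window + 1):
        for dx in range(-window, window + 1):
            cy, cx = y + dy, x + dx
            cand = img[cy - patch:cy + patch + 1, cx - patch:cx + patch + 1]
            ssd = float(np.sum((centre - cand) ** 2))
            surface[dy + window, dx + window] = np.exp(-ssd / sigma)
    return surface


if __name__ == "__main__":
    rng = np.random.default_rng(2)
    image = rng.random((64, 64))
    surf = self_similarity_surface(image, 32, 32)
    # The centre of the surface compares the patch with itself, so it is 1.0.
    print(surf.shape, surf[10, 10])
```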

Automatic Registration Method for EO/IR Satellite Image Using Modified SIFT and Block-Processing (Modified SIFT와 블록프로세싱을 이용한 적외선과 광학 위성영상의 자동정합기법)

  • Lee, Kang-Hoon;Choi, Tae-Sun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.4 no.3 / pp.174-181 / 2011
  • A new registration method for IR and EO images is proposed in this paper. An IR sensor is applicable to many areas because, unlike an EO sensor, it captures thermal radiation energy. However, extracting and matching features in IR images is difficult because of their low contrast compared to EO images. To register the two images, we use a modified SIFT (Scale Invariant Feature Transform) with block processing to increase feature distinctiveness. To remove outliers, we apply RANSAC (RANdom SAmple Consensus) to each block. Finally, we unify the matched features into a single coordinate system and remove outliers once more. We used IR images in the 3-5 μm range, and our experimental results showed robust registration with the IR images.
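
A minimal sketch of the block-processing idea is given below: features are detected and matched block by block, RANSAC is applied inside each block, the surviving matches are moved back into full-image coordinates, and a second, global RANSAC removes the remaining outliers. Plain OpenCV SIFT stands in for the paper's modified SIFT, and the test images are synthetic shifted copies rather than real EO/IR data.

```python
# Block-wise matching with per-block RANSAC, in the spirit of the entry above.
# Requires opencv-python >= 4.4 (SIFT included) and numpy. Plain SIFT stands
# in for the paper's "modified SIFT"; the images are synthetic.
import cv2
import numpy as np


def blockwise_matches(img_a, img_b, grid=4):
    """Detect/match SIFT features block by block, keep per-block RANSAC
    inliers, and return the pooled matches in full-image coordinates."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    h, w = img_a.shape
    bh, bw = h // grid, w // grid
    pts_a, pts_b = [], []

    for by in range(grid):
        for bx in range(grid):
            ys, xs = slice(by * bh, (by + 1) * bh), slice(bx * bw, (bx + 1) * bw)
            kp_a, des_a = sift.detectAndCompute(img_a[ys, xs], None)
            kp_b, des_b = sift.detectAndCompute(img_b[ys, xs], None)
            if des_a is None or des_b is None:
                continue
            matches = matcher.match(des_a, des_b)
            if len(matches) < 4:
                continue
            src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # per-block RANSAC
            if mask is None:
                continue
            offset = np.float32([xs.start, ys.start])
            for m, keep in zip(matches, mask.ravel()):
                if keep:  # unify inliers into the full-image coordinate system
                    pts_a.append(np.float32(kp_a[m.queryIdx].pt) + offset)
                    pts_b.append(np.float32(kp_b[m.trainIdx].pt) + offset)
    return np.float32(pts_a), np.float32(pts_b)


if __name__ == "__main__":
    rng = np.random.default_rng(3)
    eo = cv2.GaussianBlur((rng.random((512, 512)) * 255).astype(np.uint8), (0, 0), 3)
    ir = np.roll(eo, (5, -8), axis=(0, 1))            # stand-in for the IR band

    a, b = blockwise_matches(eo, ir)
    if len(a) >= 4:                                   # second, global outlier removal
        H, mask = cv2.findHomography(a.reshape(-1, 1, 2), b.reshape(-1, 1, 2),
                                     cv2.RANSAC, 3.0)
        if H is not None:
            print(f"{int(mask.sum())} global inliers of {len(a)} pooled matches")
            print("estimated translation:", H[0, 2], H[1, 2])  # close to (-8, 5) here
    else:
        print("not enough matches pooled from blocks")
```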

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration of the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively. The LIDAR sensor provides distance information between the sensor and objects in the scene near the sensor, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking; for instance, automatic driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, processing that generates a depthmap corresponding to the RGB image is recommended. Experimental results are provided to validate the proposed approach.
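
The depthmap generation step amounts to transforming each LIDAR return into the camera frame and projecting it through the camera intrinsics onto the image plane, where its depth fills the corresponding pixel. The sketch below does this for a single-plane scanner such as the RPLIDAR; the intrinsic and extrinsic values are made-up placeholders, not a calibration of the authors' rig.

```python
# Project single-plane LIDAR returns into an RGB camera image to build a
# sparse depthmap. The intrinsics K and the LIDAR-to-camera extrinsics
# (R, t) below are made-up placeholder values, not a real calibration.
import numpy as np

# Hypothetical pinhole intrinsics for a 640x480 camera.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
WIDTH, HEIGHT = 640, 480

# Hypothetical extrinsics: LIDAR frame -> camera frame.
R = np.eye(3)                       # assume the axes are already aligned
t = np.array([0.0, 0.10, 0.0])      # LIDAR mounted 10 cm away from the camera


def lidar_to_depthmap(angles, ranges):
    """Fill a HEIGHT x WIDTH depthmap with the camera-frame depth (z) of every
    LIDAR return that lands inside the image; unseen pixels stay at 0."""
    # LIDAR points in the LIDAR frame: x right, y up, z forward (scan plane y = 0).
    pts_lidar = np.stack([ranges * np.sin(angles),
                          np.zeros_like(ranges),
                          ranges * np.cos(angles)], axis=1)
    pts_cam = pts_lidar @ R.T + t                  # into the camera frame
    in_front = pts_cam[:, 2] > 0.1                 # keep points ahead of the camera
    pts_cam = pts_cam[in_front]

    proj = pts_cam @ K.T                           # pinhole projection
    u = (proj[:, 0] / proj[:, 2]).astype(int)
    v = (proj[:, 1] / proj[:, 2]).astype(int)

    depth = np.zeros((HEIGHT, WIDTH))
    ok = (u >= 0) & (u < WIDTH) & (v >= 0) & (v < HEIGHT)
    depth[v[ok], u[ok]] = pts_cam[ok, 2]           # store depth along the optical axis
    return depth


if __name__ == "__main__":
    # Synthetic 360-degree scan; only returns facing the camera will project.
    angles = np.linspace(-np.pi, np.pi, 1440, endpoint=False)
    ranges = np.full_like(angles, 4.0)
    dmap = lidar_to_depthmap(angles, ranges)
    print("pixels with depth:", int((dmap > 0).sum()))
```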

Fine Registration between Very High Resolution Satellite Images Using Registration Noise Distribution (등록오차 분포특성을 이용한 고해상도 위성영상 간 정밀 등록)

  • Han, Youkyung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.3 / pp.125-132 / 2017
  • Even after image registration has been applied, Very High Resolution (VHR) multi-temporal images acquired from different optical satellite sensors such as IKONOS, QuickBird, and Kompsat-2 show local misalignment due to dissimilarities in sensor properties and acquisition conditions. As this local misalignment, also referred to as Registration Noise (RN), is likely to have a negative impact on multi-temporal information extraction, detecting and reducing the RN can improve multi-temporal image processing performance. In this paper, an approach to fine registration between VHR multi-temporal images that considers the local distribution of RN is proposed. Since the dominant RN mainly exists along object boundaries, we use edge information in high-frequency regions to identify it. To validate the proposed approach, datasets were built from VHR multi-temporal images acquired by optical satellite sensors. Both qualitative and quantitative assessments confirm the effectiveness of the proposed RN-based fine registration approach compared to manual registration.
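
The observation above, that registration noise concentrates along object boundaries, suggests measuring the residual misalignment only at high-gradient pixels. The sketch below builds a gradient-magnitude mask and searches for the translation that minimizes the masked difference; the gradient threshold and the search radius are illustrative choices, not the paper's procedure.

```python
# Sketch: estimate a residual shift between two co-registered images using
# only high-gradient (edge) pixels, where registration noise is expected to
# show up. The 90th-percentile threshold and the +/-5 pixel search range are
# illustrative choices.
import numpy as np


def edge_mask(img, percentile=90):
    """Boolean mask of high-frequency (strong-gradient) pixels."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return mag >= np.percentile(mag, percentile)


def residual_shift(ref, mov, radius=5):
    """Search the (dy, dx) in [-radius, radius]^2 that minimises the squared
    difference of ref and shifted mov, evaluated only on ref's edge pixels."""
    mask = edge_mask(ref)
    # Crop a border so every tested shift stays inside the image.
    core = (slice(radius, ref.shape[0] - radius), slice(radius, ref.shape[1] - radius))
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = mov[core[0].start - dy:core[0].stop - dy,
                          core[1].start - dx:core[1].stop - dx]
            diff = (ref[core] - shifted) ** 2
            cost = float(diff[mask[core]].mean())
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    return best[1], best[2]


if __name__ == "__main__":
    rng = np.random.default_rng(4)
    ref = rng.random((200, 200))
    mov = np.roll(ref, (-2, 3), axis=(0, 1))     # simulate a small residual offset
    print(residual_shift(ref, mov))              # expect (2, -3)
```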

Self-localization for Mobile Robot Navigation using an Active Omni-directional Range Sensor (전방향 능동 거리 센서를 이용한 이동로봇의 자기 위치 추정)

  • Joung, In-Soo;Cho, Hyung-Suck
    • Journal of the Korean Society for Precision Engineering / v.16 no.1 s.94 / pp.253-264 / 1999
  • Most autonomous mobile robots view only what is in front of them, and as a result they may collide with objects approaching from the side or from behind. To overcome this problem, an active omni-directional range sensor system has been built that can obtain omni-directional range data through the use of a laser conic plane and a conic mirror. In addition, a mobile robot has to know its current location and heading angle as accurately as possible to navigate successfully in real environments. To achieve this capability, we propose a self-localization algorithm for a mobile robot using an active omni-directional range sensor in an unknown environment. The proposed algorithm estimates the current position and heading angle of the robot by registering the range data obtained at two positions, the current one and the previous one. To show the effectiveness of the proposed algorithm, a series of simulations was conducted; the results show that the algorithm is efficient and can be used for self-localization of a mobile robot in an unknown environment.

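Registering the current scan to the previous one to recover the robot's motion is, in modern terms, 2D scan matching. The sketch below runs a plain point-to-point ICP (nearest neighbours from a KD-tree, then a closed-form SVD rigid fit) on synthetic scans; it is a generic stand-in, not the paper's 1999 algorithm.

```python
# Plain 2D point-to-point ICP between two range scans, as a modern stand-in
# for the registration step described above. Requires numpy and scipy; the
# iteration count and the synthetic scans are illustrative only.
import numpy as np
from scipy.spatial import cKDTree


def best_rigid_transform(src, dst):
    """Closed-form (SVD) rigid transform mapping paired src points onto dst."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t


def icp(src, dst, iterations=20):
    """Align `src` (current scan) to `dst` (previous scan); return (R, t)."""
    tree = cKDTree(dst)
    R_total, t_total = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iterations):
        _, idx = tree.query(cur)                    # nearest-neighbour pairing
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total


if __name__ == "__main__":
    # Synthetic "previous" scan: points on the walls of a square room seen
    # from the origin.
    s = np.linspace(-5.0, 5.0, 200)
    prev_scan = np.vstack([
        np.column_stack([s, np.full_like(s, 5.0)]),
        np.column_stack([s, np.full_like(s, -5.0)]),
        np.column_stack([np.full_like(s, 5.0), s]),
        np.column_stack([np.full_like(s, -5.0), s]),
    ])
    theta = np.deg2rad(2.0)                          # small simulated robot rotation
    R_true = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
    t_true = np.array([0.10, -0.05])                 # small simulated translation
    cur_scan = (prev_scan - t_true) @ R_true         # the same walls from the new pose

    R_est, t_est = icp(cur_scan, prev_scan)
    print("rotation (deg):", np.degrees(np.arctan2(R_est[1, 0], R_est[0, 0])))  # ~2.0
    print("translation:", t_est)                     # ~(0.10, -0.05)
```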

Prototype Development of a Robotic System for Skull Drilling (로봇을 이용한 두개골 드릴링 시스템의 프로토타입 개발)

  • Chung, Yun-Chan
    • Korean Journal of Computational Design and Engineering / v.17 no.3 / pp.198-207 / 2012
  • This paper presents an overview of automated robotic system for skull drilling, which is performed to access for some neurosurgical interventions, such as brain tumor resection. Currently surgeons use automatic-releasing cranial perforators. The drilling procedure must be performed very carefully to avoid penetration of brain nerve structures; however failure cases are reported. The presented prototype system utilizes both preoperative and intraoperative information. Preoperative CT image is used for robot path planning. A NeuroMate robot with a six-DOF force sensor at the end effector is used for intraoperative operation. Intraoperative cutting force from the force sensor is the key information to revise an initial registration and preoperative path plans. Some possibilities are verified by path simulation but cadaver experiments are required for validation of this prototype.