• Title/Summary/Keyword: Sensor points

Search Results: 664

Investigation of physical sensor models for orbit modeling

  • Kim, Tae-Jung
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.217-220
    • /
    • 2005
  • Currently, a number of control points are required in order to achieve accurate geolocation of satellite images. Control points can be generated from existing maps, from surveying, or, preferably, from GPS measurements. The requirement for control points increases the cost of satellite mapping and makes mapping over inaccessible areas troublesome. This paper investigates the possibility of modeling an entire imaging strip with control points obtained from a small portion of the strip. We tested physical sensor models based on satellite orbit and attitude angles. It was anticipated that orbit modeling needed a sensor model with good accuracy of exterior orientation estimation, rather than good bundle adjustment accuracy. We implemented sensor models with various parameter sets and checked their accuracy when applied to scenes on the same orbital strip, together with bundle adjustment accuracy and the accuracy of estimated exterior orientation parameters. Results showed that models with good bundle adjustment accuracy did not always yield good orbit modeling, and that models with simple unknowns could be used for orbit modeling.

  • PDF
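The strip-extrapolation idea above can be sketched very simply: fit a low-order model of an exterior orientation parameter over time on the controlled portion of the orbit, then extrapolate it along the strip. This is a hypothetical illustration with invented data, not the paper's actual sensor model.

```python
# Hypothetical sketch: model one exterior orientation parameter (e.g. a roll
# angle) as a degree-1 polynomial of time, fitted only on a small portion of
# the orbital strip, then extrapolated to the rest of the strip.

def fit_line(ts, ys):
    """Closed-form least-squares fit of y = a*t + b."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    a = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
        sum((t - mt) ** 2 for t in ts)
    return a, my - a * mt

# Simulated roll angle drifting linearly along the orbit (degrees).
times = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
roll = [0.10 + 0.02 * t for t in times]

# Fit on the first four samples only (the portion with control points)...
a, b = fit_line(times[:4], roll[:4])

# ...and extrapolate to the far end of the strip.
predicted = a * times[-1] + b
print(abs(predicted - roll[-1]) < 1e-9)   # True: the drift is recovered
```

In practice an orbit model would of course use several parameters and a physically motivated functional form; the point here is only that parameters fitted on part of a strip can be propagated along it.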

Feasibility Analysis of Precise Sensor Modelling for KOMPSAT-3A Imagery Using Unified Control Points (통합기준점을 이용한 KOMPSAT-3A 영상의 정밀센서모델링 가능성 분석)

  • Yoon, Wansang;Park, HyeongJun;Kim, Taejung
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_1
    • /
    • pp.1089-1100
    • /
    • 2018
  • In this paper, we analyze the feasibility of establishing a precise sensor model for high-resolution satellite imagery using unified control points. For this purpose, we integrated unified control points and the aerial orthoimages from the national land information map (http://map.ngii.go.kr/ms/map/NlipMap.do) operated by the National Geographic Information Institute (NGII). Then, we collected the image coordinates corresponding to the unified control points' locations in the satellite images. The unified control points were used as observation data for establishing a precise sensor model. For the experiment, we compared the results of precise sensor modeling using GNSS survey data and those using unified control points. Our experimental results showed that it is possible to establish a precise sensor model with around 2 m accuracy when using unified control points.

An Optimal Algorithm for the Sensor Location Problem to Cover Sensor Networks

  • Kim Hee-Seon;Park Sung-Soo
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 2006.05a
    • /
    • pp.17-24
    • /
    • 2006
  • We consider the sensor location problem (SLP) on a given sensor field. We represent the sensor field as a grid of points. There are several types of sensors with different detection ranges and costs. If a sensor is placed at some point, the points inside its detection range can be covered, and the coverage ratio decreases with distance. The problem we consider here is called the multiple-type differential coverage sensor location problem (MDSLP). MDSLP is more realistic than SLP: the coverage quantities of points differ with their distance from the sensor location. The objective of MDSLP is to minimize total sensor cost while covering the entire sensor field. The problem is known to be NP-hard. We propose a new integer programming formulation of the problem. In comparison with previous models, the new model has a smaller number of constraints and variables. The problem has a symmetric structure in its solutions, and this symmetry group is used for pruning in the branch-and-bound tree. We solved the problem with a branch-and-cut (B&C) approach and tested our algorithm on about 60 instances of varying sizes.

  • PDF
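The cover/cost trade-off at the heart of the SLP can be seen in a brute-force toy version. The sketch below enumerates all placements on a tiny 1-D grid with full (non-differential) coverage inside each sensor's range; the paper's MDSLP and its branch-and-cut algorithm are far more general, and all data here are invented.

```python
# Brute-force minimum-cost sensor cover on a tiny 1-D grid of points.
# Each candidate is (position, detection range, cost); a placement is
# feasible if every grid point lies within range of some sensor.

points = list(range(6))                  # sensor field as grid points 0..5
types = [(1, 3), (2, 5)]                 # (detection range, cost) per sensor type
candidates = [(p, r, c) for p in points for (r, c) in types]

def covered(placement):
    got = set()
    for p, r, _ in placement:
        got.update(q for q in points if abs(q - p) <= r)
    return got == set(points)

best_cost = None
for mask in range(1, 1 << len(candidates)):     # all non-empty subsets
    placement = [candidates[i] for i in range(len(candidates))
                 if mask >> i & 1]
    if covered(placement):
        cost = sum(c for _, _, c in placement)
        if best_cost is None or cost < best_cost:
            best_cost = cost

print(best_cost)   # 6: two range-1 sensors at points 1 and 4 cover the field
```

Exhaustive enumeration is exponential, which is exactly why the paper resorts to an integer programming formulation with symmetry pruning and branch-and-cut.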

Refinements of Multi-sensor based 3D Reconstruction using a Multi-sensor Fusion Disparity Map (다중센서 융합 상이 지도를 통한 다중센서 기반 3차원 복원 결과 개선)

  • Kim, Si-Jong;An, Kwang-Ho;Sung, Chang-Hun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.4 no.4
    • /
    • pp.298-304
    • /
    • 2009
  • This paper describes an algorithm that improves 3D reconstruction results using a multi-sensor fusion disparity map. We can project LRF (Laser Range Finder) 3D points onto image pixel coordinates using the extrinsic calibration matrices of a camera-LRF pair (${\Phi}$, ${\Delta}$) and a camera calibration matrix (K). The LRF disparity map can be generated by interpolating the projected LRF points. In stereo reconstruction, we can compensate for invalid points caused by repeated patterns and textureless regions using the LRF disparity map. The resulting disparity map of this compensation process is the multi-sensor fusion disparity map. We can refine the multi-sensor 3D reconstruction based on stereo vision and LRF using the multi-sensor fusion disparity map. The refinement algorithm is specified in four subsections dealing with virtual LRF stereo image generation, LRF disparity map generation, multi-sensor fusion disparity map generation, and the 3D reconstruction process. It has been tested on synchronized stereo image pairs and LRF 3D scan data.

  • PDF
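The projection step the abstract relies on, mapping an LRF 3D point into pixel coordinates via the extrinsics and the camera matrix K, follows the standard pinhole relation p ~ K(RX + t). A minimal sketch with invented calibration values (not the paper's) looks like this:

```python
# Pinhole projection of a 3-D point from the LRF frame into pixel coordinates:
# transform into the camera frame with (R, t), apply K, then divide by depth.

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

K = [[500.0, 0.0, 320.0],   # fx, skew, cx  (illustrative values)
     [0.0, 500.0, 240.0],   # fy, cy
     [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0],       # identity rotation for simplicity
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
t = [0.1, 0.0, 0.0]         # assumed camera-LRF translation (metres)

def project(X):
    Xc = [a + b for a, b in zip(mat_vec(R, X), t)]   # LRF frame -> camera frame
    u = mat_vec(K, Xc)                               # homogeneous pixel coords
    return u[0] / u[2], u[1] / u[2]                  # perspective division

u, v = project([0.0, 0.0, 2.0])   # a point 2 m in front of the sensor
print(u, v)
```

Interpolating many such projected points over the pixel grid is what yields the LRF disparity map used to patch the invalid stereo regions.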

Automatic Image Registration Based on Extraction of Corresponding-Points for Multi-Sensor Image Fusion (다중센서 영상융합을 위한 대응점 추출에 기반한 자동 영상정합 기법)

  • Choi, Won-Chul;Jung, Jik-Han;Park, Dong-Jo;Choi, Byung-In;Choi, Sung-Nam
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.12 no.4
    • /
    • pp.524-531
    • /
    • 2009
  • In this paper, we propose an automatic image registration method for multi-sensor image fusion, such as fusion of visible and infrared images. Registration is achieved by finding corresponding feature points in both input images. In general, global statistical correlation is not guaranteed between multi-sensor images, which makes image registration for multi-sensor images difficult. To cope with this problem, mutual information is adopted to measure the correspondence of features and to select faithful points. An update algorithm for the projective transform is also proposed. Experimental results show that the proposed method provides robust and accurate registration results.
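The reason mutual information works where raw correlation fails is that it only requires the two modalities to be statistically predictable from each other, not linearly related. A toy sketch with made-up 1-D intensity "patches" (not real imagery) illustrates the score:

```python
import math
from collections import Counter

# Mutual information between two discrete intensity patches:
# MI = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) ).

def mutual_information(a, b):
    n = len(a)
    pa = Counter(a)
    pb = Counter(b)
    pab = Counter(zip(a, b))
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((pa[x] / n) * (pb[y] / n)))
    return mi

# An IR patch that is a monotone remapping of the visible patch is perfectly
# predictable from it, so MI is maximal (here log2(3), the patch entropy),
# even though the raw intensities are anti-correlated.
visible = [0, 0, 1, 1, 2, 2]
infrared = [9, 9, 5, 5, 1, 1]
print(mutual_information(visible, infrared))
```

In the paper's setting the same score, computed over local windows, rates candidate correspondences so that only the faithful matches feed the projective-transform update.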

A Fast Ground Segmentation Method for 3D Point Cloud

  • Chu, Phuong;Cho, Seoungjae;Sim, Sungdae;Kwak, Kiho;Cho, Kyungeun
    • Journal of Information Processing Systems
    • /
    • v.13 no.3
    • /
    • pp.491-499
    • /
    • 2017
  • In this study, we propose a new approach to segmenting ground and nonground points obtained from a 3D laser range sensor. The primary aim of this research is to provide a fast and effective method for ground segmentation. In each frame, we divide the point cloud into small groups. All threshold points and start-ground points in each group are then analyzed. To determine threshold points, we rely on three features: gradient, lost threshold points, and abnormalities in the distance between the sensor and a particular threshold point. After a threshold point is determined, a start-ground point is identified by considering the height difference between two consecutive points. All points from a start-ground point to the next threshold point are ground points; other points are nonground. This process is repeated until all points are labelled.
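The core test, comparing the height difference between consecutive points against a threshold, can be sketched in a few lines. This is a deliberately simplified stand-in for the paper's method (which also uses gradients, threshold points, and start-ground points); the scan data and threshold are invented.

```python
# Walk consecutive points in a scan line and label a point "ground" while its
# height difference to the last accepted ground point stays under a threshold.

def label_ground(heights, max_step=0.15):
    labels = []
    last_ground = heights[0]
    for h in heights:
        if abs(h - last_ground) <= max_step:
            labels.append("ground")
            last_ground = h          # ground reference tracks gentle slopes
        else:
            labels.append("nonground")
    return labels

scan = [0.00, 0.02, 0.05, 0.90, 0.95, 0.07]   # a small obstacle mid-scan
print(label_ground(scan))
```

Because the reference height only advances on ground points, the obstacle's points are rejected while the ground behind it is still recovered, which is the behaviour a per-group ground segmentation needs.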

Multi-point Flexible Touch Sensor Based on Capacitor Structure Using Thin Copper-Plated Polyimide Film for Textile Applications

  • Lee, Junheon;Kim, Taekyeong
    • Textile Coloration and Finishing
    • /
    • v.31 no.2
    • /
    • pp.65-76
    • /
    • 2019
  • A multi-point touch input sensor, with touch points of different sizes and hence different capacitances connected by only one pair of signal transmission lines, was fabricated using a polyimide film coated with a thin copper plate. The capacitance increases as the number of sheets of fabric spacer placed between the two sheets of polyimide film decreases. Therefore, the touch input sensor could be manufactured without fabric spacers, which was possible because the polyimide film itself acts as the dielectric material in the capacitor. On the multi-point touch sensor, higher capacitance was obtained when pressing wider-area touch points of 10 mm to 25 mm diameter on average. However, the capacitance of a system comprising two sheets of touch sensors was considerably low, causing a serious overlap of the capacitance values in the data collected from the reliability test. Although the capacitance values could be increased by stacking several sheets of touch sensors, the overlap in the data was still observed. After reducing the size of all touch points to 10 mm and stacking up to eight sheets of sensors, reliable and consistent capacitance data were obtained. Five different capacitance signals could be induced in the sensors by pushing touch points simultaneously.
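The trends reported above, capacitance rising with touch-point area and falling with spacer thickness, follow from the parallel-plate estimate C = ε₀ εᵣ A / d. The sketch below uses generic textbook values (the polyimide permittivity and geometry are assumptions, not measurements from the paper):

```python
import math

# Parallel-plate capacitance of a circular touch point: C = eps0 * eps_r * A / d.

EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_R = 3.4               # assumed relative permittivity of polyimide

def capacitance(diameter_mm, gap_mm):
    area = math.pi * (diameter_mm * 1e-3 / 2) ** 2   # plate area, m^2
    return EPS0 * EPS_R * area / (gap_mm * 1e-3)     # farads

c_small = capacitance(10, 0.05)    # 10 mm touch point
c_large = capacitance(25, 0.05)    # 25 mm touch point
print(c_large / c_small)           # area scales with diameter^2, so ~6.25x
```

This is why distinguishing the five touch points by capacitance alone becomes hard once the points are similar in size: the signal separation is set entirely by area ratios and plate spacing.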

Pose Tracking of Moving Sensor using Monocular Camera and IMU Sensor

  • Jung, Sukwoo;Park, Seho;Lee, KyungTaek
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.3011-3024
    • /
    • 2021
  • Pose estimation of a sensor is an important issue in many applications such as robotics, navigation, tracking, and Augmented Reality. This paper proposes a visual-inertial integration system appropriate for dynamically moving sensors. The orientation estimated from an Inertial Measurement Unit (IMU) is used to calculate the essential matrix based on the intrinsic parameters of the camera. Using epipolar geometry, outliers in the feature point matching are eliminated from the image sequences. The pose of the sensor can then be obtained from the feature point matches. The use of the IMU helps to eliminate erroneous point matches in images of dynamic scenes at an early stage. After the outliers are removed, the selected feature point matches are used to calculate a precise fundamental matrix. Finally, with the feature point matching relation, the pose of the sensor is estimated. The proposed procedure was implemented and tested in comparison with existing methods. Experimental results have shown the effectiveness of the proposed technique.
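The outlier test implied above is the epipolar constraint: with a rotation R (here supplied by the IMU, per the paper's idea) and a translation direction t, the essential matrix is E = [t]ₓR, and an inlier match satisfies x₂ᵀ E x₁ ≈ 0 in normalized image coordinates. The R, t, and points below are made-up values for illustration:

```python
# Build E = [t]_x R and score matches by their epipolar residual x2^T E x1.

def skew(t):
    return [[0.0, -t[2], t[1]],
            [t[2], 0.0, -t[0]],
            [-t[1], t[0], 0.0]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def epipolar_residual(E, x1, x2):
    Ex1 = [sum(E[i][j] * x1[j] for j in range(3)) for i in range(3)]
    return sum(x2[i] * Ex1[i] for i in range(3))

R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]   # identity (no rotation)
t = [1.0, 0.0, 0.0]                                       # pure sideways motion
E = mat_mul(skew(t), R)

# With identity R and x-translation, a true match keeps its y-coordinate.
x1 = [0.2, 0.1, 1.0]
x2 = [0.1, 0.1, 1.0]    # consistent match: residual ~ 0
bad = [0.1, 0.4, 1.0]   # inconsistent match: large residual
print(epipolar_residual(E, x1, x2), epipolar_residual(E, x1, bad))
```

Thresholding this residual is what lets the IMU-derived E discard bad matches before the fundamental matrix and pose are estimated from the surviving inliers.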

Investigation of Sensor Models for Precise Geolocation of GOES-9 Images

  • Hur Dongseok;Lee Tae-Yoon;Kim Taejung
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.91-94
    • /
    • 2005
  • A numerical formula that represents the relationship between a point in a satellite image and its ground position is called a sensor model. For precise geolocation of satellite images, we need an error-free sensor model. However, the sensor model based on GOES ephemeris data has some error, in particular after the Image Motion Compensation (IMC) mechanism has been turned off. To solve this problem, we investigate three sensor models: the collinearity model, the Direct Linear Transform (DLT) model, and an orbit-based model. We apply matching between GOES images and a global coastline database and use the successful results as control points. With these control points we improve the initial image geolocation accuracy using the three models. We compare results from the three sensor models applied to GOES-9 images. As a result, a suitable sensor model for precise geolocation of GOES-9 images is proposed.

  • PDF
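Of the three models named above, the DLT is the easiest to state concretely: image coordinates are ratios of linear functions of the ground coordinates, u = (L₁X + L₂Y + L₃Z + L₄) / (L₉X + L₁₀Y + L₁₁Z + 1), and similarly for v. The eleven coefficients below are invented for a trivial camera, not estimated from GOES-9 data:

```python
# Forward evaluation of a DLT sensor model with coefficients L[0..10].

def dlt_project(L, X, Y, Z):
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return u, v

# Coefficients of an axis-aligned unit-focal-length camera offset along Z,
# so that u = X / (Z + 1) and v = Y / (Z + 1).
L = [1.0, 0.0, 0.0, 0.0,
     0.0, 1.0, 0.0, 0.0,
     0.0, 0.0, 1.0]

u, v = dlt_project(L, 2.0, 1.0, 3.0)
print(u, v)   # 0.5 0.25
```

Estimating the L coefficients from control points is a linear least-squares problem, which is what makes the DLT attractive when, as here, control points come cheaply from coastline matching.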

A Study on the Image Processing of Visual Sensor for Weld Seam Tracking in GMA Welding (GMA 용접에서 용접선 추적용 시각센서의 화상처리에 관한 연구)

  • 정규철;김재웅
    • Journal of Welding and Joining
    • /
    • v.18 no.3
    • /
    • pp.60-67
    • /
    • 2000
  • In this study, we constructed a preview-sensing visual sensor system for weld seam tracking in GMA welding. The visual sensor consists of a CCD camera, a diode laser system with a cylindrical lens, and a band-pass filter to overcome image degradation due to spatter and/or arc light. To obtain the weld joint position and edge points accurately from the captured image, we compared the Hough transform method with the central difference method. As a result, we show that the Hough transform method can extract the points more accurately and can be applied to real-time weld seam tracking. Image processing is carried out to extract the straight lines that represent the laser stripe. After extracting the lines, the weld joint position and edge points are determined from the intersection points of the lines. Even when a spatter trace is present in the image, it is possible to recognize the position of the weld joint. Weld seam tracking was precisely implemented using the Hough transform method, and it is possible to track the weld seam when the offset angle is within $\pm15^{\circ}$.

  • PDF
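The line extraction step can be sketched as classic Hough voting: each edge pixel votes for every (θ, ρ) pair consistent with it, and strong accumulator cells correspond to straight lines such as the laser stripe segments. The tiny synthetic "image" below is illustrative only, not the paper's weld imagery:

```python
import math
from collections import Counter

# Hough voting: for each edge pixel (x, y) and each sampled angle theta,
# accumulate a vote in the cell (theta in degrees, rho = x cos + y sin).

def hough_votes(points, theta_steps=180):
    votes = Counter()
    for x, y in points:
        for i in range(theta_steps):
            theta = math.pi * i / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            votes[(round(math.degrees(theta)), round(rho))] += 1
    return votes

stripe = [(3, y) for y in range(6)]   # edge pixels on the vertical line x = 3
votes = hough_votes(stripe)
print(votes[(0, 3)])   # 6: all stripe pixels agree on theta = 0 deg, rho = 3
```

Because every stripe pixel contributes to the same cell, the method stays robust to isolated spatter pixels, whose votes scatter over many cells, which is why the paper prefers it over the central difference method for noisy welding images.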