• Title/Summary/Keyword: spatial interpolation algorithm


An analysis of the moving speed effect of the receiver array on the passive synthetic aperture signal processing (수동형 합성개구 신호처리에서 수신 배열 센서의 이동 속도에 대한 영향 분석)

  • Kim, Sea-Moon;Byun, Sung-Hoon;Oh, Sehyun
    • The Journal of the Acoustical Society of Korea / v.35 no.2 / pp.125-133 / 2016
  • In order to obtain high-resolution seafloor images, research on SA (Synthetic Aperture) processing and the development of related underwater systems have been carried out in many countries. Recently, SA processing has also been recognized as an important technique in Korea, and researchers have begun related basic studies. However, most previous studies ignored the Doppler effect caused by a moving receiver array. In this paper, reconstructed SAS (Synthetic Aperture Sonar) images and position errors are analyzed according to the speed of the moving array in order to understand the effect of its motion on the SAS images. The spatial frequency domain interpolation algorithm is used in the analysis. The results show that, when the array motion is not considered, the estimated position error and the image distortion grow as the moving speed of the array increases. However, if receiver signals compensated for the array motion are used, the position error and image distortion can be eliminated. In conclusion, a signal processing scheme that compensates for the Doppler effect is necessary, especially when the array speed is over 1 m/s.
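The core of the compensation step described above can be sketched as a Doppler phase correction. This is a minimal illustration, not the paper's actual processing chain: the sound speed, carrier frequency, and single-frequency signal model are all assumed here.

```python
import numpy as np

C = 1500.0   # assumed nominal sound speed in water (m/s)
F0 = 100e3   # assumed sonar carrier frequency (Hz)

def doppler_shift(f0, v, c=C):
    """Apparent frequency at a receiver closing on the scene at speed v."""
    return f0 * (1.0 + v / c)

def compensate(signal, t, v, f0=F0, c=C):
    """Remove the Doppler phase ramp introduced by the array motion."""
    df = doppler_shift(f0, v, c) - f0
    return signal * np.exp(-2j * np.pi * df * t)

# A 1 m/s array speed already shifts a 100 kHz carrier by ~67 Hz.
t = np.arange(0, 1e-3, 1e-6)
ping = np.exp(2j * np.pi * doppler_shift(F0, 1.0) * t)
fixed = compensate(ping, t, 1.0)
# after compensation the signal sits back on the nominal carrier F0
```

At 1 m/s the residual phase ramp over a typical ping is already large enough to distort the reconstructed image, which matches the paper's threshold.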

Restoration of Missing Data in Satellite-Observed Sea Surface Temperature using Deep Learning Techniques (딥러닝 기법을 활용한 위성 관측 해수면 온도 자료의 결측부 복원에 관한 연구)

  • Won-Been Park;Heung-Bae Choi;Myeong-Soo Han;Ho-Sik Um;Yong-Sik Song
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.6 / pp.536-542 / 2023
  • Satellites represent cutting-edge technology, offering significant advantages in spatial and temporal observation. National agencies worldwide harness satellite data to respond to marine accidents and to analyze ocean fluctuations effectively. However, challenges arise with high-resolution satellite-based sea surface temperature data (Operational Sea Surface Temperature and Sea Ice Analysis, OSTIA), where gaps or empty areas may occur due to satellite instrumentation, geographical errors, and cloud cover, and these issues can take several hours to rectify. This study addressed the issue of missing OSTIA data by employing LaMa, a recent deep-learning-based inpainting algorithm, and evaluated its performance against three existing image processing techniques. The evaluation, based on the coefficient of determination (R2) and the mean absolute error (MAE), demonstrated the superior performance of the LaMa algorithm: it consistently achieved R2 values of 0.9 or higher and kept the MAE below 0.5 °C, outperforming bilinear interpolation, bicubic interpolation, and DeepFill v1. We plan to evaluate the feasibility of integrating the LaMa technique into an operational satellite data provision system.
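One of the baselines LaMa is compared against, bilinear-style gap filling, can be sketched as linear interpolation from the valid cells surrounding a cloud mask. The field, mask, and grid below are illustrative, not the OSTIA data.

```python
import numpy as np
from scipy.interpolate import griddata

def fill_gaps_bilinear(sst, mask):
    """Fill masked (e.g., cloud-covered) cells by linear interpolation
    from the surrounding valid cells."""
    yy, xx = np.mgrid[0:sst.shape[0], 0:sst.shape[1]]
    valid = ~mask
    return griddata((yy[valid], xx[valid]), sst[valid], (yy, xx),
                    method="linear")

def mae(truth, pred, mask):
    """Mean absolute error over the reconstructed (masked) cells only."""
    return float(np.abs(truth[mask] - pred[mask]).mean())

# Example: a smooth synthetic SST field with a small interior gap.
truth = np.add.outer(np.linspace(10, 12, 20), np.linspace(0, 1, 20))
mask = np.zeros_like(truth, dtype=bool)
mask[8:12, 8:12] = True
obs = truth.copy()
obs[mask] = np.nan
filled = fill_gaps_bilinear(obs, mask)
print(round(mae(truth, filled, mask), 3))  # → 0.0 on this linear field
```

On smooth fields such a baseline does well, which is why the comparison in the paper focuses on cases with sharper structure where the learned model wins.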

Object Detection and 3D Position Estimation based on Stereo Vision (스테레오 영상 기반의 객체 탐지 및 객체의 3차원 위치 추정)

  • Son, Haengseon;Lee, Seonyoung;Min, Kyoungwon;Seo, Seongjin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.4 / pp.318-324 / 2017
  • We introduced a stereo camera on an aircraft to detect flight objects and to estimate their 3D positions. A saliency map algorithm based on PCT was proposed to detect a small object between clouds, and a stereo matching algorithm was then applied to compute the disparity between the left and right images. To extract accurate disparity, the cost aggregation region was made variable so as to adapt to the detected object; in this paper, the detection result is used as the cost aggregation region. To extract still more precise disparity, sub-pixel interpolation is used to obtain a floating-point disparity at the sub-pixel level. We also proposed a method to estimate the spatial position of an object using the camera parameters. The approach is expected to be applicable to image-based object detection and collision avoidance systems for autonomous aircraft in the future.
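A common way to realize the sub-pixel step described above is a parabola fit through the matching costs at the integer disparity and its two neighbors; the paper does not spell out its exact formula, so this is a sketch of the standard scheme, followed by the usual triangulation Z = f·B/d.

```python
def subpixel_disparity(cost, d):
    """Refine an integer disparity d to float precision by fitting a
    parabola through the costs at d-1, d, d+1."""
    c0, c1, c2 = cost[d - 1], cost[d], cost[d + 1]
    denom = c0 - 2.0 * c1 + c2
    if denom == 0.0:          # flat cost curve: keep the integer value
        return float(d)
    return d + 0.5 * (c0 - c2) / denom

def depth_from_disparity(d, focal_px, baseline_m):
    """Standard stereo triangulation: Z = f * B / d (d in pixels)."""
    return focal_px * baseline_m / d

# Costs of a quadratic whose true minimum lies at disparity 5.25.
costs = [(i - 5.25) ** 2 for i in range(10)]
print(subpixel_disparity(costs, 5))        # → 5.25
print(depth_from_disparity(5.0, 1000.0, 0.5))  # → 100.0 (m)
```

The focal length and baseline above are placeholders; in practice they come from the camera parameters the paper calibrates.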

Depth Image Upsampling Algorithm Using Selective Weight (선택적 가중치를 이용한 깊이 영상 업샘플링 알고리즘)

  • Shin, Soo-Yeon;Kim, Dong-Myung;Suh, Jae-Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.7 / pp.1371-1378 / 2017
  • In this paper, we present an upsampling technique for depth map images that uses selective bilateral weights and a color weight based on a Laplacian function. These weights prevent the color texture copy problem that appears in existing upsamplers based on bilateral weights. First, we construct a high-resolution image using bicubic interpolation. Next, we detect color texture regions using the pixel value differences of the depth and color images. If an interpolated pixel belongs to a color texture edge region, we calculate spatial and depth weights over its 3×3 neighboring pixels and compute a cost value to determine the boundary pixel value; otherwise, we use the color weight instead of the depth weight. Finally, the pixel value with the minimum cost is selected as the pixel value of the high-resolution depth image. Simulation results show that the proposed algorithm achieves good performance in terms of PSNR and subjective visual quality.
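The selective-weight idea can be sketched as a joint-bilateral refinement over the 3×3 neighborhood, where the range term switches between depth and color differences. The sigma values and the weighted-average form below are illustrative assumptions; the paper's actual cost minimization differs in detail.

```python
import numpy as np

def refine_pixel(depth, color, y, x, on_texture_edge,
                 s_sigma=1.0, r_sigma=10.0):
    """Refine one interpolated depth pixel over its 3x3 neighbourhood.
    On a colour-texture edge the range weight uses depth differences;
    elsewhere it uses colour differences (the 'selective weight' idea)."""
    num = den = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            w_s = np.exp(-(dy * dy + dx * dx) / (2.0 * s_sigma ** 2))
            if on_texture_edge:
                diff = depth[y, x] - depth[ny, nx]   # depth weight
            else:
                diff = color[y, x] - color[ny, nx]   # colour weight
            w_r = np.exp(-(diff * diff) / (2.0 * r_sigma ** 2))
            num += w_s * w_r * depth[ny, nx]
            den += w_s * w_r
    return num / den

# Sanity check: a flat depth patch stays (numerically) flat.
flat = np.full((3, 3), 5.0)
gray = np.full((3, 3), 128.0)
refined = refine_pixel(flat, gray, 1, 1, on_texture_edge=True)
```

Switching the range term is what prevents color texture from being copied into smooth depth regions.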

Super Resolution by Learning Sparse-Neighbor Image Representation (Sparse-Neighbor 영상 표현 학습에 의한 초해상도)

  • Eum, Kyoung-Bae;Choi, Young-Hee;Lee, Jong-Chan
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.12 / pp.2946-2952 / 2014
  • Among example-based super-resolution (SR) techniques, neighbor embedding (NE) has been inspired by manifold learning, particularly locally linear embedding. However, the poor generalization of NE degrades its performance, because the local training sets are always too small. To solve this problem, we propose learning a sparse-neighbor image representation based on SVR, which has excellent generalization ability. Given a low-resolution image, we first use bicubic interpolation to synthesize its high-resolution version. We extract patches from this synthesized image and determine whether each patch corresponds to a region of high or low spatial frequency. After the weight of each patch is obtained by our method, the patches are used to learn separate SVR models. Finally, we update the pixel values using the previously learned SVRs. Experimental results quantitatively and qualitatively confirm the improvement of the proposed algorithm over conventional interpolation methods and NE.
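The high/low spatial-frequency routing of patches can be sketched with a simple gradient-energy test; the threshold and the gradient-magnitude criterion are assumptions for illustration, since the paper does not give its exact classifier here.

```python
import numpy as np

def is_high_frequency(patch, threshold=10.0):
    """Classify a patch as high- or low-spatial-frequency by its mean
    gradient magnitude; each class would then be routed to its own SVR."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.hypot(gy, gx).mean()) > threshold

flat = np.full((5, 5), 100.0)                            # smooth region
edge = np.tile([0.0, 255.0, 0.0, 255.0, 0.0], (5, 1))    # strong texture
print(is_high_frequency(flat), is_high_frequency(edge))  # → False True
```

Training one SVR per class lets the high-frequency model specialize on edges and texture while the low-frequency model handles smooth gradients.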

Automatic 3D soil model generation for southern part of the European side of Istanbul based on GIS database

  • Sisman, Rafet;Sahin, Abdurrahman;Hori, Muneo
    • Geomechanics and Engineering / v.13 no.6 / pp.893-906 / 2017
  • Automatic large scale soil model generation is very critical stage for earthquake hazard simulation of urban areas. Manual model development may cause some data losses and may not be effective when there are too many data from different soil observations in a wide area. Geographic information systems (GIS) for storing and analyzing spatial data help scientists to generate better models automatically. Although the original soil observations were limited to soil profile data, the recent developments in mapping technology, interpolation methods, and remote sensing have provided advanced soil model developments. Together with advanced computational technology, it is possible to handle much larger volumes of data. The scientists may solve difficult problems of describing the spatial variation of soil. In this study, an algorithm is proposed for automatic three dimensional soil and velocity model development of southern part of the European side of Istanbul next to Sea of Marmara based on GIS data. In the proposed algorithm, firstly bedrock surface is generated from integration of geological and geophysical measurements. Then, layer surface contacts are integrated with data gathered in vertical borings, and interpolations are interpreted on sections between the borings automatically. Three dimensional underground geology model is prepared using boring data, geologic cross sections and formation base contours drawn in the light of these data. During the preparation of the model, classification studies are made based on formation models. Then, 3D velocity models are developed by using geophysical measurements such as refraction-microtremor, array microtremor and PS logging. The soil and velocity models are integrated and final soil model is obtained. All stages of this algorithm are carried out automatically in the selected urban area. 
The system directly reads the GIS soil data in the selected part of urban area and 3D soil model is automatically developed for large scale earthquake hazard simulation studies.

Optimization of PRISM parameters using the SCEM-UA algorithm for gridded daily time series precipitation (시계열 강수량 공간화를 위한 SCEM-UA 기반의 PRISM 매개변수 최적화)

  • Kim, Yong-Tak;Park, Moonhyung;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association / v.53 no.10 / pp.903-915 / 2020
  • Long-term high-resolution hydro-meteorological data have been recognized as an essential element in establishing water resources plans, and the increasing demand for spatial precipitation data in areas such as climate, hydrology, geography, ecology, and the environment is apparent. However, the potential limitations of existing area-weighted and numerical interpolation methods for interpolating precipitation in high-altitude areas remain underexplored. The PRISM (Precipitation-Elevation Regressions on Independent Slopes Model) model can produce gridded precipitation that adequately considers topographic characteristics (e.g., slope and altitude), which are not substantially accounted for in existing interpolation techniques. In this study, the PRISM model was optimized with SCEM-UA (Shuffled Complex Evolution Metropolis-University of Arizona) to produce daily gridded precipitation. As a result, the minimum impact radius was calculated as 9.10 km and the maximum as 34.99 km. The weighted coastal altitude was 681.03 m, and the minimum and maximum distances from the coast were 9.85 km and 38.05 km. The distance weighting factor was calculated to be about 0.87, confirming that the PRISM result is very sensitive to distance. The results showed that the proposed PRISM model reproduces the observed statistical properties reasonably well.
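A PRISM-style station weighting with the optimized parameters above might look like the sketch below. The exact functional form (full weight inside the minimum radius, 1/d^a decay, cutoff at the maximum radius) is an assumption for illustration; the paper combines several weights, including the coastal-distance terms omitted here.

```python
import numpy as np

def distance_weights(dists_km, a=0.87, r_min=9.10, r_max=34.99):
    """PRISM-style distance weights using the optimized parameters from
    the abstract: full weight within the minimum impact radius, 1/d**a
    decay in between, zero weight beyond the maximum radius.
    Assumes at least one station lies within r_max."""
    d = np.asarray(dists_km, dtype=float)
    w = np.where(d <= r_min, 1.0, 1.0 / np.maximum(d, r_min) ** a)
    w[d > r_max] = 0.0
    return w / w.sum()   # normalize so the station weights sum to 1

# Stations at 5, 20, and 40 km from the target grid cell.
w = distance_weights([5.0, 20.0, 40.0])
print(np.round(w, 3))  # the station beyond 34.99 km gets zero weight
```

These weights would then feed the per-cell precipitation-elevation regression that gives PRISM its name.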

Deep Learning based Estimation of Depth to Bearing Layer from In-situ Data (딥러닝 기반 국내 지반의 지지층 깊이 예측)

  • Jang, Young-Eun;Jung, Jaeho;Han, Jin-Tae;Yu, Yonggyun
    • Journal of the Korean Geotechnical Society / v.38 no.3 / pp.35-42 / 2022
  • The N-value from the Standard Penetration Test (SPT), one of the representative in-situ tests, is an important index that provides basic geological information and the depth of the bearing layer for the design of geotechnical structures. For reasons of time and cost, tests can only be carried out at representative sampling points; however, considerable variability and uncertainty exist in soil layers, so it is difficult to grasp the characteristics of an entire site from limited test results. Thus, spatial interpolation techniques such as Kriging and IDW (inverse distance weighting) have been used to predict values at unknown points from existing data. Recently, to increase the accuracy of interpolation results, studies combining geotechnics and deep learning have been conducted. In this study, based on the SPT results of about 22,000 boreholes, a comparative study was conducted to predict the depth of the bearing layer using deep learning methods and IDW. The average prediction error for the bearing layer was 3.01 m for IDW, and 3.22 m and 2.46 m for the fully connected network and PointNet, respectively; the corresponding standard deviations were 3.99, 3.95, and 3.54. As a result, the PointNet deep learning algorithm showed improved results compared with IDW and the other deep learning method.
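The IDW baseline used in the comparison is a textbook method and can be sketched directly; the boring coordinates and depths below are made up for illustration.

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted estimate of the bearing-layer depth at a
    query point from known boring results."""
    d = np.linalg.norm(np.asarray(xy_known, float)
                       - np.asarray(xy_query, float), axis=1)
    if np.any(d == 0.0):               # query coincides with a boring
        return float(np.asarray(z_known, float)[d == 0.0][0])
    w = 1.0 / d ** power
    return float(np.dot(w, z_known) / w.sum())

# Three hypothetical SPT borings with bearing-layer depths (m).
pts = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
depths = [20.0, 24.0, 28.0]
print(round(idw(pts, depths, (5.0, 5.0)), 2))  # → 24.0 (equidistant case)
```

IDW only sees coordinates and distances, which is one reason a point-cloud network like PointNet, able to use richer spatial context, can outperform it.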

Accuracy Enhancement using Network Based GPS Carrier Phase Differential Positioning (네트워크 기반의 GPS 반송파 상대측위 정확도 향상)

  • Lee, Yong-Wook;Bae, Kyoung-Ho
    • Spatial Information Research / v.15 no.2 / pp.111-121 / 2007
  • GPS positioning offers 3D positions from code and carrier phase measurements, and the user can obtain precise positioning accuracy using the carrier phase in Real-Time Kinematic (RTK) mode. The main problem RTK has to overcome is the need for a reference station (RS) generally no more than about 10 km away, which is significantly different from DGPS, where distances to the RS can exceed several hundred kilometers. The accuracy of today's RTK is limited by distance-dependent errors from the orbit, ionosphere, and troposphere, as well as station-dependent influences such as multipath and antenna phase center variations. For these reasons, the author proposes network-based GPS carrier phase differential positioning using multiple RSs located about 30 km from the user receiver. An important part of the proposed system is the algorithm and software development, named DAUNet. The main processes are corrections computation, corrections interpolation, and searching for the integer ambiguity. Corrections are computed satellite by satellite and epoch by epoch at each reference station using a functional model and a stochastic model based on a linear combination algorithm, and the corrections at the user receiver are interpolated using area correction parameters. As a result, users can obtain cm-level positioning.
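One common way to realize "area correction parameters" is to fit a correction plane through the per-station corrections and evaluate it at the user position. This is a sketch of that idea, not the DAUNet implementation; the station geometry and correction values are invented.

```python
import numpy as np

def area_correction(rs_xy, rs_corr, user_xy):
    """Fit a plane c = a0 + a1*x + a2*y through the corrections observed
    at the reference stations and evaluate it at the user position."""
    rs_xy = np.asarray(rs_xy, float)
    A = np.column_stack([np.ones(len(rs_xy)), rs_xy])
    coef, *_ = np.linalg.lstsq(A, np.asarray(rs_corr, float), rcond=None)
    return float(coef[0] + coef[1] * user_xy[0] + coef[2] * user_xy[1])

# Three reference stations ~30 km apart, per-satellite corrections in m.
rs = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]
corr = [0.10, 0.16, 0.22]
print(round(area_correction(rs, corr, (10.0, 10.0)), 3))  # → 0.16
```

Fitting a plane captures the spatially linear part of the distance-dependent errors (ionosphere, troposphere, orbit), which is exactly what makes 30 km baselines workable.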


Task Balancing Scheme of MPI Gridding for Large-scale LiDAR Data Interpolation (대용량 LiDAR 데이터 보간을 위한 MPI 격자처리 과정의 작업량 발란싱 기법)

  • Kim, Seon-Young;Lee, Hee-Zin;Park, Seung-Kyu;Oh, Sang-Yoon
    • Journal of the Korea Society of Computer and Information / v.19 no.9 / pp.1-10 / 2014
  • In this paper, we propose an MPI gridding algorithm for LiDAR data that minimizes communication between cores. LiDAR data collected from aircraft are 3D spatial information used in various applications. Since the LiDAR data often have a higher resolution than actually required, or include non-surface information, filtering the raw data is required. To use the filtered data, interpolation based on a data structure for searching adjacent locations is conducted to reconstruct the data. Since the processing time of LiDAR data is directly proportional to its size, there have been many studies on high-performance parallel processing systems using MPI. However, previously proposed parallel approaches suffer possible performance degradation from imbalanced data sizes among cores and from the communication overhead needed to resolve boundary inconsistencies. We conduct empirical experiments to verify the effectiveness of the proposed algorithm. The results show that the total execution time of the proposed method is up to 4.2 times lower than that of the conventional method on heterogeneous clusters.
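The load-balancing idea, assigning each core a nearly equal point count rather than an equal-area stripe, can be sketched as below. This is a simplified single-axis partition, not the paper's full MPI scheme.

```python
def balanced_chunks(points, n_cores):
    """Split LiDAR points into chunks of (nearly) equal point count along
    the x axis, so that each core grids roughly the same number of points
    regardless of how unevenly they are spread over the area."""
    pts = sorted(points, key=lambda p: p[0])
    k, r = divmod(len(pts), n_cores)
    chunks, start = [], 0
    for i in range(n_cores):
        end = start + k + (1 if i < r else 0)  # first r chunks take one extra
        chunks.append(pts[start:end])
        start = end
    return chunks

# Ten points with unevenly spread x values, split across 4 cores.
pts = [(x * x % 97, x) for x in range(10)]
sizes = [len(c) for c in balanced_chunks(pts, 4)]
print(sizes)  # → [3, 3, 2, 2]
```

Because chunk boundaries fall between sorted coordinates, only points near a shared boundary ever need to be exchanged between neighboring ranks, which is the communication the paper tries to minimize.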