• Title/Summary/Keyword: Spatial smoothing

Estimating Three-Dimensional Scattering Centers of a Target Using the 3D MEMP Method in Radar Target Recognition (레이다 표적 인식에서 3D MEMP 기법을 이용한 표적의 3차원 산란점 예측)

  • Shin, Seung-Yong;Myung, Noh-Hoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.19 no.2 / pp.130-137 / 2008
  • This paper presents high-resolution techniques for three-dimensional (3D) scattering center extraction from a radar backscattered signal in radar target recognition. We propose a 3D pairing procedure, a new approach to estimating 3D scattering centers. This pairing procedure is more accurate and robust than the general criterion. 3D MEMP (Matrix Enhancement and Matrix Pencil) with the 3D pairing procedure first creates an autocorrelation matrix from radar backscattered field data samples. A matrix pencil method is then used to extract 3D scattering centers from the principal eigenvectors of the autocorrelation matrix. The autocorrelation matrix is constructed by the MSSP (modified spatial smoothing preprocessing) method. The observation matrix required for estimation of the 3D scattering center locations is built using the sparse scanning order concept. In order to demonstrate the performance of the proposed technique, we use backscattered field data generated by ideal point scatterers.
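
A minimal sketch of the kind of forward-backward (modified) spatial smoothing that MSSP-style preprocessing relies on; array sizes, variable names, and the toy signal are illustrative and not taken from the paper.

```python
# Modified (forward-backward) spatial smoothing of overlapping subarrays,
# used to build a full-rank autocorrelation matrix from coherent samples.
import numpy as np

def mssp_covariance(x, L):
    """Average covariances of L-element subarrays of snapshot vector x,
    combining forward and backward (conjugate-reversed) subarrays."""
    N = len(x)
    M = N - L + 1                      # number of overlapping subarrays
    J = np.fliplr(np.eye(L))           # exchange (reversal) matrix
    R = np.zeros((L, L), dtype=complex)
    for m in range(M):
        sub = x[m:m + L].reshape(-1, 1)
        Rf = sub @ sub.conj().T        # forward subarray covariance
        Rb = J @ Rf.conj() @ J         # backward (conjugated, reversed) term
        R += Rf + Rb
    return R / (2 * M)

# toy example: two complex exponentials sampled on a uniform grid
n = np.arange(32)
x = np.exp(1j * 0.9 * n) + 0.5 * np.exp(1j * 2.1 * n)
R = mssp_covariance(x, L=16)           # restores rank for coherent components
```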

A Discontinuity Feature Enhancement Filter Using DCT Fuzziness (DCT블록의 애매성을 이용한 불연속특징 향상 필터)

  • Kim, Tae-Yong
    • Journal of Korea Multimedia Society / v.8 no.8 / pp.1069-1079 / 2005
  • Though there have been many methods to detect features in the spatial domain, a compressed image must be decoded, processed, and encoded again before they can be applied. Alternatively, we can manipulate a compressed image directly in the Discrete Cosine Transform (DCT) domain, which is used for compressing videos and images in standards such as MPEG and JPEG. In our previous work we proposed a model-based discontinuity evaluation technique in the DCT domain that had problems with rotated or non-ideal discontinuities. In this paper, we propose a fuzzy filtering technique that consists of height fuzzification, direction fuzzification, and fuzzy filtering of discontinuities. The enhancement achieved by the fuzzy filtering includes the linking, thinning, and smoothing of discontinuities in the DCT domain. Although the detected discontinuities are rough in a low-resolution image for the size (8 × 8 pixels) of the DCT block, experimental results show that this technique is fast and stable in enhancing the quality of discontinuities.
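
An illustrative sketch of the general idea of working per 8 × 8 DCT block and fuzzifying a discontinuity strength, not the paper's exact model; the height measure, the membership thresholds, and the helper names are assumptions made for the example.

```python
# Per-block discontinuity "height" from low-frequency DCT coefficients,
# followed by a simple fuzzification step.
import numpy as np
from scipy.fft import dctn

def block_discontinuity_height(block8x8):
    c = dctn(block8x8, norm='ortho')
    # horizontal/vertical edge energy concentrates in c[0,1] and c[1,0]
    return np.hypot(c[0, 1], c[1, 0])

def fuzzify(height, low=5.0, high=40.0):
    """Ramp membership: 0 below `low`, 1 above `high` (made-up thresholds)."""
    return np.clip((height - low) / (high - low), 0.0, 1.0)

img = np.random.rand(64, 64) * 255
memberships = np.array([
    [fuzzify(block_discontinuity_height(img[r:r+8, c:c+8]))
     for c in range(0, 64, 8)]
    for r in range(0, 64, 8)
])  # one fuzzy "edge strength" per DCT block
```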

The history of high intensity rainfall estimation methods in New Zealand and the latest High Intensity Rainfall Design System (HIRDS.V3)

  • Horrell, Graeme;Pearson, Charles
    • Proceedings of the Korea Water Resources Association Conference / 2011.05a / pp.16-16 / 2011
  • Statistics of extreme rainfall play a vital role in engineering practice from the perspective of mitigating flooding and protecting infrastructure and human life. While flood frequency assessments based on river flood flow data are preferred, the analysis of rainfall data is often more convenient because rainfall recording networks are spatially finer, often have longer records, and are potentially more easily transferable from site to site. Rainfall frequency analysis as a design tool has developed over the years in New Zealand, from Seelye's daily rainfall frequency maps in 1947 to Thompson's web-based tool in 2010. This paper presents a history of the development of New Zealand rainfall frequency analysis methods, and the details of the latest method, so that comparisons may in future be made with the development of Korean methods. One of the main findings in the development of the methods was new knowledge on the distribution of New Zealand rainfall extremes. The High Intensity Rainfall Design System (HIRDS.V3) method (Thompson, 2011) is based upon a regional rainfall frequency analysis with the following assumptions: an "index flood" regional rainfall frequency method, using the median annual maximum rainfall as the indexing variable; a regional dimensionless growth curve based on the Generalised Extreme Value (GEV) distribution, using goodness-of-fit tests for the GEV, Gumbel (EV1), and Generalised Logistic (GLO) distributions; and mapping of the median annual maximum rainfall and the parameters of the regional growth curves using thin-plate smoothing splines, a 2 km × 2 km grid, L-moment statistics, 10 durations from 10 minutes to 72 hours, and a maximum Average Recurrence Interval of 100 years.
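
A short sketch of the index-flood idea behind such a regional method: a dimensionless GEV growth curve scaled by the site's median annual maximum rainfall. The parameter values and the approximation of annual exceedance probability as 1/ARI are illustrative assumptions, not HIRDS coefficients.

```python
# Index-flood rainfall quantile from a regional GEV growth curve.
from scipy.stats import genextreme

def rainfall_quantile(index_rainfall_mm, ari_years, shape, loc, scale):
    """Rainfall depth for a given Average Recurrence Interval (ARI)."""
    aep = 1.0 / ari_years                      # approx. annual exceedance probability
    growth = genextreme.ppf(1.0 - aep, c=shape, loc=loc, scale=scale)
    return index_rainfall_mm * growth          # scale the dimensionless growth curve

# e.g. a site whose median annual maximum 24 h rainfall is 85 mm
depth_100yr = rainfall_quantile(85.0, ari_years=100,
                                shape=-0.1, loc=1.0, scale=0.3)
```

Note that SciPy's `genextreme` shape parameter `c` uses the opposite sign convention to the usual GEV shape parameter.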

Direction of arrival estimation of non-Gaussian signals for nested arrays: Applying fourth-order difference co-array and the successive method

  • Ye, Changbo;Chen, Weiyang;Zhu, Beizuo;Tang, Leiming
    • ETRI Journal / v.43 no.5 / pp.869-880 / 2021
  • Herein, we estimate the direction of arrival (DOA) of non-Gaussian signals for nested arrays (NAs) by implementing the fourth-order difference co-array (FODC) and successive methods. In particular, considering the property of the fourth-order cumulant (FOC), we first construct the FODC of the NA, which can obtain O(N⁴) virtual elements using N physical sensors, whereas conventional FOC methods can only obtain O(N²) virtual elements. In addition, the closed-form expression of the FODC is presented to verify the enhanced degrees of freedom (DOFs). Subsequently, we exploit the vectorized FOC (VFOC) matrix to match the FODC of the NA. Notably, the VFOC matrix is a single-snapshot vector, and the initial DOA estimates can be obtained via the discrete Fourier transform method under the underdetermined correlation matrix condition, which utilizes the complete DOFs of the FODC. Finally, fine estimates are obtained through the spatial smoothing-Capon method with partial spectrum searching. Numerical simulation verifies the effectiveness and superiority of the proposed method.
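
A minimal sketch of the final refinement step named above (spatial smoothing followed by a Capon spectrum search). The array geometry, sizes, and the random covariance matrix are placeholders, not the paper's nested-array FODC statistics.

```python
# Spatial smoothing of a ULA covariance matrix, then a Capon spectrum search.
import numpy as np

def spatial_smoothing(R, L):
    """Average the L x L submatrices of R along its main diagonal."""
    N = R.shape[0]
    M = N - L + 1
    return sum(R[m:m+L, m:m+L] for m in range(M)) / M

def capon_spectrum(R, angles_deg, d=0.5):
    L = R.shape[0]
    Rinv = np.linalg.pinv(R)
    n = np.arange(L)
    spec = []
    for th in np.deg2rad(angles_deg):
        a = np.exp(2j * np.pi * d * n * np.sin(th)).reshape(-1, 1)  # steering vector
        spec.append(1.0 / np.real(a.conj().T @ Rinv @ a).item())
    return np.array(spec)

# usage: in the paper R would come from the vectorized fourth-order cumulants;
# here a random Hermitian matrix keeps the sketch self-contained.
X = np.random.randn(10, 10) + 1j * np.random.randn(10, 10)
R = X @ X.conj().T
P = capon_spectrum(spatial_smoothing(R, L=6), np.linspace(-60, 60, 121))
```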

Theoretical Investigations on Compatibility of Feedback-Based Cellular Models for Dune Dynamics : Sand Fluxes, Avalanches, and Wind Shadow ('되먹임 기반' 사구 역학 모형의 호환 가능성에 대한 이론적 고찰 - 플럭스, 사면조정, 바람그늘 문제를 중심으로 -)

  • RHEW, Hosahng
    • Journal of the Korean association of regional geographers / v.22 no.3 / pp.681-702 / 2016
  • Two different modelling approaches to dune dynamics have been established thus far: continuous models that emphasize the precise representation of the wind field, and feedback-based models that focus on the interactions between dunes rather than aerodynamics. Though feedback-based models have proven their capability to capture the essence of dune dynamics, the compatibility issues among these models have been less addressed. This research investigated, mostly from a theoretical point of view, the algorithmic compatibility of three feedback-based dune models: sand slab models, the Nishimori model, and the de Castro model. The major findings are as follows. First, sand slab models and the de Castro model are compatible in terms of flux perspectives, whereas the Nishimori model needs a tuning factor. Second, the algorithm of avalanching can easily be implemented via repetitive spatial smoothing, showing high compatibility between models. Finally, the wind shadow rule might not be a necessary component for reproducing dune patterns, contrary to the interpretations or assumptions of previous studies; the wind shadow rule might instead be more important for understanding bedform-level interactions. Overall, the three models show high compatibility, or seem to require relatively small modifications, though more thorough investigation is needed.
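
A toy illustration of the point that avalanching can be implemented as repeated local smoothing of a height grid: over-steep cells are relaxed toward their neighbourhood average. The repose threshold, kernel size, and iteration cap are illustrative choices, not values from any of the three models.

```python
# Avalanche relaxation as repetitive spatial smoothing of a cellular height field.
import numpy as np
from scipy.ndimage import uniform_filter

def relax_avalanches(h, repose_slope=0.6, max_iters=50):
    h = h.copy()
    for _ in range(max_iters):
        gy, gx = np.gradient(h)
        steep = np.hypot(gx, gy) > repose_slope
        if not steep.any():
            break                              # all slopes below the repose angle
        smoothed = uniform_filter(h, size=3)   # 3 x 3 neighbourhood average
        h[steep] = smoothed[steep]             # relax only the over-steep cells
    return h

dune = np.random.rand(50, 50)                  # random initial surface
stable = relax_avalanches(dune)
```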

Seismic interval velocity analysis on prestack depth domain for detecting the bottom simulating reflector of gas-hydrate (가스 하이드레이트 부존층의 하부 경계면을 규명하기 위한 심도영역 탄성파 구간속도 분석)

  • Ko Seung-Won;Chung Bu-Heung
    • 한국신재생에너지학회:학술대회논문집 / 2005.06a / pp.638-642 / 2005
  • For gas hydrate exploration, long-offset multichannel seismic data were acquired with a 4 km streamer in the Ulleung Basin of the East Sea. The dataset was processed to define the BSRs (Bottom Simulating Reflectors) and to estimate the amount of gas hydrates. Confirming the presence of a BSR and investigating its physical properties from the seismic section are important for gas hydrate detection. In particular, a faster interval velocity overlying a slower interval velocity indicates the likely presence of gas hydrate above the BSR and free gas beneath it. Consequently, estimating correct interval velocities and analyzing their spatial variations are critical steps for gas hydrate detection using seismic reflection data. Using Dix's equation, Root Mean Square (RMS) velocities can be converted into interval velocities. However, this is not a proper way to investigate interval velocities above and below the BSR, considering that RMS velocities have poor resolution and accuracy and that the conversion assumes interval velocities increase with depth. Therefore, we used Migration Velocity Analysis (MVA) software produced by Landmark Co. to estimate correct interval velocities in detail. MVA is a process that yields velocities of sediments between layers using Common Mid Point (CMP) gathered seismic data. The CMP gathers for MVA should be produced after basic processing steps that enhance the signal-to-noise ratio of the primary reflections. The prestack depth migrated section is produced using interval velocities, and interval velocities are the key parameters governing the quality of the prestack depth migration section. The correctness of interval velocities can be examined by the presence of Residual Move Out (RMO) on CMP gathers: if there is no RMO, the peaks of primary reflection events are flat in the horizontal direction for all offsets of Common Reflection Point (CRP) gathers, which shows that prestack depth migration was done with a correct velocity field. The method used in this study, tomographic inversion, needs two initial inputs: the dataset obtained from preprocessing by removing multiples and noise and partial stacking, and the depth-domain velocity model built by smoothing and editing the interval velocities converted from RMS velocities. After three iterations of tomographic inversion, the optimum interval velocity field can be fixed. In conclusion, the final interval velocity around the BSR decreases abruptly from 2500 m/s to 1400 m/s, and the BSR appears at a depth of about 200 m below the sea bottom.
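
A short sketch of the Dix conversion mentioned above, which turns RMS velocities picked at successive two-way times into interval velocities; the velocity and time values are illustrative, not from the Ulleung Basin dataset.

```python
# Dix (1955) conversion of RMS velocities to interval velocities.
import numpy as np

def dix_interval_velocities(v_rms, t0):
    """V_int,n = sqrt((V_n^2 t_n - V_{n-1}^2 t_{n-1}) / (t_n - t_{n-1}))."""
    v_rms, t0 = np.asarray(v_rms, float), np.asarray(t0, float)
    num = v_rms[1:]**2 * t0[1:] - v_rms[:-1]**2 * t0[:-1]
    return np.sqrt(num / (t0[1:] - t0[:-1]))

v_int = dix_interval_velocities(v_rms=[1500, 1800, 2100],   # m/s
                                t0=[1.0, 1.6, 2.2])          # s, two-way time
```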

Radar rainfall prediction based on deep learning considering temporal consistency (시간 연속성을 고려한 딥러닝 기반 레이더 강우예측)

  • Shin, Hongjoon;Yoon, Seongsim;Choi, Jaemin
    • Journal of Korea Water Resources Association / v.54 no.5 / pp.301-309 / 2021
  • In this study, we tried to improve the performance of the existing U-Net-based deep learning rainfall prediction model, which can weaken the meaning of time-series order. For this, a ConvLSTM2D U-Net structure that considers the temporal consistency of the data was applied, and its accuracy was evaluated against a RainNet model and an extrapolation-based advection model. In addition, we tried to reduce the uncertainty in the model training process by training not only a single model but also an ensemble of 10 models. The trained neural network rainfall prediction model was optimized to generate 10-minute-ahead predictions from four consecutive fields covering the past 30 minutes. The outputs of the deep learning rainfall prediction models are difficult to distinguish visually, but with the ConvLSTM2D U-Net the magnitude of the prediction error is the smallest and the location of rainfall is relatively accurate. In particular, the ensemble ConvLSTM2D U-Net showed high CSI, low MAE, and a narrow error range, and predicted rainfall more accurately and with more stable performance than the other models. However, the prediction performance at a specific point was very low compared to the performance over the entire area, so the deep learning rainfall prediction model also has limitations. Through this study, it was confirmed that the ConvLSTM2D U-Net structure, which accounts for change over time, can increase prediction accuracy, but the convolutional deep neural network model is still limited by spatial smoothing in strong-rainfall regions and in detailed rainfall prediction.
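
A minimal sketch of feeding a short radar sequence through a ConvLSTM2D layer, the ingredient that distinguishes this approach from a plain U-Net. The layer sizes, input resolution, and single-layer head are illustrative assumptions, not the authors' architecture.

```python
# Toy ConvLSTM2D front end: four past radar frames in, one predicted frame out.
import tensorflow as tf

seq_len, h, w = 4, 128, 128                      # four 10-min frames = 30 min history
inputs = tf.keras.Input(shape=(seq_len, h, w, 1))
x = tf.keras.layers.ConvLSTM2D(32, kernel_size=3, padding='same',
                               return_sequences=False)(inputs)   # temporal fusion
x = tf.keras.layers.Conv2D(32, 3, padding='same', activation='relu')(x)
outputs = tf.keras.layers.Conv2D(1, 1, activation='relu')(x)     # 10-min-ahead field
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer='adam', loss='mae')
```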

Comparison of Forest Carbon Stocks Estimation Methods Using Forest Type Map and Landsat TM Satellite Imagery (임상도와 Landsat TM 위성영상을 이용한 산림탄소저장량 추정 방법 비교 연구)

  • Kim, Kyoung-Min;Lee, Jung-Bin;Jung, Jaehoon
    • Korean Journal of Remote Sensing / v.31 no.5 / pp.449-459 / 2015
  • The conventional National Forest Inventory (NFI)-based forest carbon stock estimation method is suitable for national-scale estimation, but not for regional-scale estimation due to the lack of NFI plots. In this study, for the purpose of regional-scale carbon stock estimation, we created grid-based forest carbon stock maps using spatial ancillary data and two types of up-scaling methods. Chungnam province was chosen as the study area, for which the 5th NFI (2006~2009) data were collected. The first method (method 1) uses the forest type map as ancillary data and a regression model for forest carbon stock estimation, whereas the second method (method 2) uses satellite imagery and the k-Nearest Neighbor (k-NN) algorithm. Additionally, in order to account for uncertainty, the final AGB carbon stock maps were generated by performing 200 iterations of a Monte Carlo simulation. As a result, compared to the NFI-based estimate (21,136,911 tonC), the total carbon stock was over-estimated by method 1 (22,948,151 tonC) but under-estimated by method 2 (19,750,315 tonC). In a paired T-test with 186 independent data points, the average carbon stock estimate of the NFI-based method was statistically different from method 2 (p < 0.01) but not from method 1 (p > 0.01). In particular, the Monte Carlo simulation showed that the smoothing effect of the k-NN algorithm and the mis-registration error between NFI plots and the satellite image can lead to large uncertainty in carbon stock estimation. Although method 1 was found suitable for carbon stock estimation of forest stands featuring heterogeneous trees in Korea, a satellite-based method is still needed to provide periodic estimates for un-investigated, large forest areas. In these respects, future work will focus on the spatial and temporal extent of the study area and robust carbon stock estimation with various satellite images and estimation methods.
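
A sketch of the k-NN up-scaling idea behind method 2: carbon stock measured at inventory plots is imputed to unsampled pixels from spectral similarity. The band count, plot data, and neighbour settings are synthetic placeholders, not the study's Landsat TM inputs.

```python
# k-NN imputation of plot-level carbon stock to every map pixel.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
plot_spectra = rng.random((200, 6))          # 6 Landsat-like bands at NFI plots
plot_carbon = rng.random(200) * 150          # tonC/ha measured at the plots
pixel_spectra = rng.random((10_000, 6))      # bands for every map pixel

knn = KNeighborsRegressor(n_neighbors=5, weights='distance')
knn.fit(plot_spectra, plot_carbon)
carbon_map = knn.predict(pixel_spectra)      # per-pixel carbon stock estimate
# averaging over k neighbours is exactly the smoothing effect the study
# identifies as a source of uncertainty in high- and low-carbon stands
```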

Principal component analysis in C[11]-PIB imaging (주성분분석을 이용한 C[11]-PIB imaging 영상분석)

  • Kim, Nambeom;Shin, Gwi Soon;Ahn, Sung Min
    • The Korean Journal of Nuclear Medicine Technology / v.19 no.1 / pp.12-16 / 2015
  • Purpose: Principal component analysis (PCA) is a method often used in neuroimage analysis as a multivariate technique for describing a high-dimensional correlation structure in a lower-dimensional space. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of correlated variables into a set of values of linearly independent variables called principal components. In this study, in order to investigate the usefulness of PCA in brain PET image analysis, we analyzed C[11]-PIB PET images as a representative case. Materials and Methods: Nineteen subjects were included in this study (normal = 9, AD/MCI = 10). PET scans were acquired for 20 min starting 40 min after intravenous injection of 9.6 MBq/kg C[11]-PIB. All emission recordings were acquired with the Biograph 6 Hi-Rez (Siemens-CTI, Knoxville, TN) in three-dimensional acquisition mode. The transmission map for attenuation correction was acquired from CT scans (130 kVp, 240 mA). Standardized uptake values (SUVs) of C[11]-PIB were calculated from PET/CT. In normal subjects, 3T MRI T1-weighted images were obtained to create a C[11]-PIB template. Spatial normalization and smoothing were conducted as pre-processing for PCA using SPM8, and PCA was conducted using Matlab2012b. Results: Through PCA, we obtained linearly uncorrelated independent principal component images. The principal component images obtained through PCA can simplify the variation of the whole set of C[11]-PIB images into several principal components, including the variation of the neocortex and white matter and the variation of deep brain structures such as the pons. Conclusion: PCA is useful for analyzing and extracting the main pattern of C[11]-PIB images. PCA, as a multivariate analysis method, might be useful for pattern recognition of neuroimages such as FDG-PET or fMRI as well as C[11]-PIB images.
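
A sketch of the PCA step on spatially normalized, smoothed images: each subject's SUV image is flattened to a voxel vector and the stack is decomposed into principal component images. The array shapes and component count are placeholders for the 19-subject dataset.

```python
# PCA across subjects: rows are subjects, columns are voxels.
import numpy as np
from sklearn.decomposition import PCA

n_subjects, n_voxels = 19, 100_000
suv_images = np.random.rand(n_subjects, n_voxels)    # one SUV vector per subject

pca = PCA(n_components=5)
scores = pca.fit_transform(suv_images)               # subject loadings on each PC
component_images = pca.components_                   # (5, n_voxels) PC "images"
explained = pca.explained_variance_ratio_            # variance captured per PC
```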
