• Title/Summary/Keyword: Sampling point

Search Results: 827

A Study on the Point-Mass Filter for Nonlinear State-Space Models (비선형 상태공간 모델을 위한 Point-Mass Filter 연구)

  • Yeongkwon Choe
    • Journal of Industrial Technology
    • /
    • v.43 no.1
    • /
    • pp.57-62
    • /
    • 2023
  • In this review, we introduce the non-parametric Bayesian filtering algorithm known as the point-mass filter (PMF) and discuss recent studies related to it. The PMF realizes Bayesian filtering by placing a deterministic grid on the state space and calculating the probability density at each grid point. Thanks to its uniform sampling, the PMF is known for its robustness and high accuracy compared with other non-parametric Bayesian filtering algorithms. However, a drawback of the PMF is its inherently high computational complexity in the prediction phase. In this review, we explain the principles of the PMF algorithm and the reasons for its high computational complexity, and summarize recent research efforts to overcome this challenge. We hope this review encourages the consideration of PMF applications for various systems.
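
The grid-based predict/update cycle the abstract describes can be sketched as follows. This is an illustrative 1-D toy: the grid bounds, the Gaussian dynamics, and the measurement model are all assumptions for demonstration, not taken from the paper.

```python
import numpy as np

def point_mass_filter_step(grid, prior, transition_pdf, likelihood):
    """One predict/update cycle of a 1-D point-mass filter.

    grid           : fixed, uniformly spaced grid on the state space
    prior          : probability masses at the grid points (sum to 1)
    transition_pdf : p(x_k | x_{k-1}), evaluated pairwise on the grid
    likelihood     : p(z_k | x_k) evaluated at each grid point
    """
    dx = grid[1] - grid[0]
    # Prediction: an O(N^2) sum over all grid-point pairs -- this is
    # the computationally expensive step the review discusses.
    trans = transition_pdf(grid[:, None], grid[None, :])
    predicted = trans @ prior * dx
    # Update: pointwise multiplication by the measurement likelihood.
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Toy example: near-random-walk dynamics, Gaussian measurement z = 0.5.
grid = np.linspace(-5.0, 5.0, 201)
prior = np.exp(-0.5 * grid**2)
prior /= prior.sum()
transition = lambda x, xp: np.exp(-0.5 * (x - xp) ** 2 / 0.1**2)
likelihood = np.exp(-0.5 * (grid - 0.5) ** 2 / 0.2**2)
post = point_mass_filter_step(grid, prior, transition, likelihood)
```

The pairwise transition matrix is what makes the prediction step quadratic in the number of grid points; the surveyed research targets exactly that cost (e.g. by exploiting convolution structure).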

LiDAR Measurement Analysis in Range Domain

  • Sooyong Lee
    • Journal of Sensor Science and Technology
    • /
    • v.33 no.4
    • /
    • pp.187-195
    • /
    • 2024
  • Light detection and ranging (LiDAR), a sensor widely used in mobile robots and autonomous vehicles, primarily measures the range of objects in three-dimensional space and generates point clouds. These point clouds consist of the coordinates of each reflection point and can be used for various tasks, such as obstacle detection and environment recognition, but they require several processing steps, such as three-dimensional modeling, mesh generation, and rendering. Efficient data processing is crucial because LiDAR provides a large number of real-time measurements at high sampling frequencies. Despite the rapid growth in controller computational power, simplifying the computational algorithm is still necessary. This paper presents a method for estimating the presence of curbs, humps, and ground tilt using range measurements from a single horizontal or vertical scan instead of point clouds. These features can be obtained by data segmentation based on linearization. The effectiveness of the proposed algorithm was verified by experiments in various environments.
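
The "segmentation based on linearization" step can be illustrated with a recursive split (iterative end-point fit) over a single scan. The curb geometry and threshold below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def split_segments(points, tol=0.05):
    """Recursive split (iterative end-point fit) of ordered scan points.

    The polyline is split wherever the maximum perpendicular distance
    to the chord between its endpoints exceeds `tol`; breakpoints
    between the fitted line segments mark candidate features such as
    a curb edge or a change in ground tilt.
    """
    if len(points) < 3:
        return [points]
    p0, p1 = points[0], points[-1]
    chord = p1 - p0
    norm = np.hypot(chord[0], chord[1])
    if norm == 0.0:
        return [points]
    # Perpendicular distance of every point to the chord (2-D cross product).
    d = np.abs(chord[0] * (points[:, 1] - p0[1])
               - chord[1] * (points[:, 0] - p0[0])) / norm
    i = int(np.argmax(d))
    if d[i] <= tol:
        return [points]
    return split_segments(points[:i + 1], tol) + split_segments(points[i:], tol)

# Toy vertical scan profile: flat ground, then a 0.15 m curb step at x = 1 m.
x = np.linspace(0.0, 2.0, 41)
z = np.where(x < 1.0, 0.0, 0.15)
segments = split_segments(np.column_stack([x, z]), tol=0.05)
```

Working on the 1-D scan profile instead of a full point cloud is what keeps the computation light, in line with the paper's motivation.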

Study on the Real Time Medical Image Processing (실시간 의학 영상 처리에 관한 연구)

  • 유선국;이건기
    • Journal of Biomedical Engineering Research
    • /
    • v.8 no.2
    • /
    • pp.118-122
    • /
    • 1987
  • The medical image processing system is intended for a diverse set of users in medical imaging. The system consists of a 640-Kbyte IBM-PC/AT with a 30-Mbyte hard disk and a special-purpose image processor with video input devices and a display monitor. Images may be recorded and processed in real time at sampling rates up to 10 MHz. The system provides a wide range of image enhancement facilities via a menu-driven software package, including point-by-point processing, image averaging, convolution filtering, and subtraction.
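
Of the listed facilities, image averaging is the easiest to illustrate: averaging N repeated frames of the same scene reduces zero-mean noise by roughly √N. A minimal numpy sketch (the frame count and noise level are arbitrary assumptions, not the system's specifications):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.zeros((64, 64))                 # noise-free scene
# 16 repeated acquisitions of the same scene with additive sensor noise
frames = truth + rng.normal(0.0, 10.0, size=(16, 64, 64))
avg = frames.mean(axis=0)                  # averaged image
# Averaging 16 frames cuts the zero-mean noise std by about sqrt(16) = 4.
```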

Design of a 12b SAR ADC for DMPPT Control in a Photovoltaic System

  • Rho, Sung-Chan;Lim, Shin-Il
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.4 no.3
    • /
    • pp.189-193
    • /
    • 2015
  • This paper presents design techniques for a successive approximation register (SAR) type 12b analog-to-digital converter (ADC) for distributed maximum power point tracking (DMPPT) control in a photovoltaic system. Both a top-plate sampling technique and a V_CM-based switching technique are applied to the 12b capacitor digital-to-analog converter (CDAC). With these techniques, a 12b SAR ADC can be implemented with a 10b capacitor-array digital-to-analog converter (DAC). To enhance the accuracy of the ADC, a single-to-differential converted DAC is exploited with a dual sampling technique during top-plate sampling. Simulation results show that the proposed ADC achieves a signal-to-noise plus distortion ratio (SNDR) of 70.8 dB, a spurious-free dynamic range (SFDR) of 83.3 dB, and an effective number of bits (ENOB) of 11.5b in a 0.35 μm bipolar-CMOS-DMOS (BCDMOS) technology. Total power consumption is 115 μW under a 3.3 V supply voltage at a sampling frequency of 1.25 MHz, and the figure of merit (FoM) is 32.68 fJ/conversion-step.
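
The reported ENOB and figure of merit follow from the standard relations ENOB = (SNDR − 1.76)/6.02 and FoM = P/(2^ENOB · f_s). A quick check using the figures quoted in the abstract lands within rounding of the reported 32.68 fJ/conversion-step:

```python
# Quick consistency check of the figures quoted in the abstract.
sndr_db = 70.8                      # signal-to-noise plus distortion ratio, dB
enob = (sndr_db - 1.76) / 6.02      # effective number of bits (~11.5b)
power = 115e-6                      # total power consumption, W
fs = 1.25e6                         # sampling frequency, Hz
fom = power / (2**enob * fs)        # Walden FoM, J per conversion-step
```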

Probabilistic Evaluation of Voltage Quality on Distribution System Containing Distributed Generation and Electric Vehicle Charging Load

  • CHEN, Wei;YAN, Hongqiang;PEI, Xiping
    • Journal of Electrical Engineering and Technology
    • /
    • v.12 no.5
    • /
    • pp.1743-1753
    • /
    • 2017
  • Since the probabilistic load flow (PLF) calculation of a distribution system containing distributed generation (DG) and electric vehicle charging load (EVCL) involves multiple random variables, a Monte Carlo method based on composite sampling is proposed, building on the existing simple random sampling Monte Carlo simulation method (SRS-MCSM), to perform a probabilistic assessment of the voltage quality of such a system. The method considers not only the randomness of wind speed and light intensity and the uncertainty of the base load and EVCL, but also other stochastic disturbances, such as the failure rate of transmission lines. Different sampling methods are applied according to the characteristics of each random factor. Simulation results on the IEEE 9-bus and IEEE 34-bus systems demonstrate the validity, accuracy, speed, and practicality of the proposed method. Compared with the SRS-MCSM, the proposed method offers higher computational efficiency and better simulation accuracy. The variation of nodal voltages before and after connecting DG and EVCL is compared and analyzed, especially the voltage fluctuation at the grid-connection point of the DG and EVCL.
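
The idea of matching each random factor with a sampling distribution suited to its nature can be sketched with a toy Monte Carlo study. The per-unit voltage model and every coefficient below are hypothetical, for illustration only; the paper performs full load-flow calculations.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # Monte Carlo trials

# Each random factor is sampled with a distribution matched to its nature:
wind = 8.0 * rng.weibull(2.0, N)        # wind speed: Weibull (shape 2, scale 8 m/s)
load = rng.normal(1.0, 0.05, N)         # base-load multiplier: Gaussian
line_up = rng.random(N) > 0.01          # line availability: 1% failure rate

# Hypothetical per-unit voltage model: wind-DG output offsets the load drop,
# and a line outage adds a large extra drop (coefficients are invented).
p_wind = np.clip(wind / 12.0, 0.0, 1.0)       # normalized wind-DG output
v = 1.0 - 0.05 * load + 0.03 * p_wind - 0.1 * (~line_up)
prob_undervoltage = float((v < 0.95).mean())  # P(nodal voltage < 0.95 pu)
```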

Development of a Time-selective Self-triggering Water Sampler and Its Application to In-situ Calibration of a Turbidity Sensor

  • Jin, Jae-Youll;Hwang, Keun-Choon;Park, Jin-Soon;Yum, Ki-Dai;Oh, Jae-Kyung
    • Journal of the korean society of oceanography
    • /
    • v.34 no.4
    • /
    • pp.200-206
    • /
    • 1999
  • Seawater sampling is the primary task in the study of marine environmental parameters whose analysis requires shipboard or laboratory experiments, and it is also required for the calibration of some in situ instruments. A new automatic bottle (AUTTLE) was developed for seawater sampling at any desired time and water depth by self-triggering. It supports any type of single or assembled mooring for up to 15 days, as well as manual actuation with a remote messenger, as in existing instantaneous single-point water samplers. Its sampling capacity is 2 liters and its time-setting resolution is 1 second. A field experiment with an optical backscattering sensor (OBS) and a total of 14 AUTTLEs for in situ calibration of the OBS shows that the AUTTLE can improve our understanding of the behavior of sand/mud mixtures in environments with high waves and strong tides. The AUTTLE will serve as a valuable instrument in various fields of oceanography, especially where synchronized seawater sampling at several sites is required and/or information during storm periods is important.

Self-adaptive sampling for sequential surrogate modeling of time-consuming finite element analysis

  • Jin, Seung-Seop;Jung, Hyung-Jo
    • Smart Structures and Systems
    • /
    • v.17 no.4
    • /
    • pp.611-629
    • /
    • 2016
  • This study presents a new approach to surrogate modeling for time-consuming finite element analysis. Surrogate models are widely used to reduce the computational cost of iterative analyses. Although a variety of methods have been investigated, difficulties remain in surrogate modeling from a practical point of view: (1) how to derive an optimal design of experiments (i.e., the number of training samples and their locations); and (2) how to diagnose the surrogate model. To overcome these difficulties, we propose sequential surrogate modeling based on a Gaussian process model (GPM) with self-adaptive sampling. The proposed approach not only enables further sampling to make the GPM more accurate, but also evaluates model adequacy within a sequential framework. Its applicability is first demonstrated on mathematical test functions. It is then applied, as a substitute for iterative finite element analysis, to Monte Carlo simulation for response uncertainty analysis under correlated input uncertainties. In all numerical studies, the GPM was built automatically with minimal user intervention. The approach can be customized for various response surfaces and can help less experienced users save effort.
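
A minimal sketch of self-adaptive sampling with a GP surrogate, assuming an RBF kernel and a cheap 1-D stand-in for the expensive finite element model (everything here is illustrative, not the paper's formulation): each iteration adds a training sample where the predictive variance is largest.

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and pointwise variance (zero prior mean)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

f = lambda x: np.sin(3.0 * x)      # cheap stand-in for the FE model
X = np.array([0.0, 1.0])           # initial design of experiments
grid = np.linspace(0.0, 1.0, 101)
for _ in range(6):                 # self-adaptive sequential sampling loop
    _, var = gp_posterior(X, f(X), grid)
    X = np.append(X, grid[int(np.argmax(var))])  # sample where GP is least certain
mean, var = gp_posterior(X, f(X), grid)
```

After a few iterations the residual predictive variance over the whole domain is small, which is the kind of model-adequacy signal a sequential framework can use as a stopping rule.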

Digital Image Processing Using Non-separable High Density Discrete Wavelet Transformation (비분리 고밀도 이산 웨이브렛 변환을 이용한 디지털 영상처리)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.9 no.1
    • /
    • pp.165-176
    • /
    • 2013
  • This paper introduces a high-density discrete wavelet transform using quincunx sampling, a discrete wavelet transform that combines the high-density discrete transform and a non-separable processing method, each of which has its own characteristics and advantages. The high-density discrete wavelet transform expands an N-point signal into M transform coefficients with M > N. It is a new set of dyadic wavelet transforms with two generators, providing denser sampling in both time and frequency. The new transform is approximately shift-invariant and has intermediate scales; in two dimensions, it outperforms the standard discrete wavelet transform in terms of shift invariance. Although the transform utilizes more wavelets, the higher sampling rates come at a cost, and some wavelets lack a dominant spatial orientation, which prevents them from isolating directional features. A solution to this problem is a non-separable method. The quincunx lattice is a non-separable sampling scheme in image processing that treats the different directions more homogeneously than separable two-dimensional schemes. The proposed wavelet transform can generate sub-images at multiple rotation angles and therefore performs well in image processing applications.
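
The quincunx lattice itself is easy to picture: keep only the pixels whose row and column indices have an even sum. A toy sketch (the sampling pattern only, not the paper's transform):

```python
import numpy as np

def quincunx_downsample(img):
    """Keep the quincunx sub-lattice: pixels where row + column is even.

    This halves the sample count (a separable dyadic downsampling would
    keep only a quarter) and treats the two diagonal directions
    symmetrically, unlike row/column-separable schemes.
    """
    i, j = np.indices(img.shape)
    mask = (i + j) % 2 == 0
    return img[mask], mask

img = np.arange(64, dtype=float).reshape(8, 8)
kept, mask = quincunx_downsample(img)
```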

A study on the Forest inventory work (삼림자원조사법(森林資源調査法)의 연구(硏究))

  • Kim, Kap Duk
    • Journal of Korean Society of Forest Science
    • /
    • v.5 no.1
    • /
    • pp.10-15
    • /
    • 1966
  • 1) The purpose of this study was to compare the forest survey by the ground method with that by the aerial photo method. 2) In this study, the forest type map was made using the radial line plotter and the radial line triangulation method. 3) The difference between the area found from the forest type map mentioned above and that found by compass surveying on the ground was non-significant. 4) On aerial photos, stratification was carried out very easily. 5) The following sampling methods were applied: the line plot method, the representative sampling method, and stratified random sampling on the aerial photo. 6) In confirming sampling points, the line plot method and the representative sampling method were easier than the others. 7) As to stand volume, the maximum value was given by stratification, and the minimum by the line plot method.

A Note on the Decision of Sample Size by Relative Standard Error in Successive Occasions (계속조사에서 상대표준오차를 이용한 표본크기 결정에 관한 고찰)

  • Han, GeunShik;Lee, Gi-Sung
    • The Korean Journal of Applied Statistics
    • /
    • v.28 no.3
    • /
    • pp.477-483
    • /
    • 2015
  • This study deals with the problem of determining sample size from the relative standard error of estimates derived from survey results on successive occasions. The construction-sector population from business survey results is used to calculate quartiles of the relative standard error of 1,000 samples obtained by simple or stratified random sampling. The sample size at occasion t, based on the relative standard error at occasion (t-1), was calculated for each sampling method. The results show that, in terms of the sample size determined by the relative standard error at occasion (t-1), simple random sampling differs significantly from stratified sampling. In addition, the sample size differs depending on how the population is stratified, so careful attention is required when determining sample size from the relative standard error of estimates on successive occasions.
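
For simple random sampling, the sample-size criterion reduces to the textbook relation RSE = CV/√n, i.e. n = (CV/RSE)². A sketch with a finite population correction (the CV and target values below are made up for illustration, not the paper's survey figures):

```python
import math

def sample_size_srs(cv, target_rse, N=None):
    """Sample size so a simple-random-sample mean estimate meets a
    target relative standard error, given the population coefficient
    of variation; applies the finite population correction if N is given.
    """
    n0 = (cv / target_rse) ** 2            # from RSE = CV / sqrt(n)
    if N is None:
        return math.ceil(n0)
    return math.ceil(n0 / (1.0 + n0 / N))  # finite population correction

n_infinite = sample_size_srs(cv=1.5, target_rse=0.05)        # -> 900
n_finite = sample_size_srs(cv=1.5, target_rse=0.05, N=1000)  # -> 474
```

For stratified sampling the same target RSE leads to a different (often smaller) n because within-stratum variances replace the overall CV, which is the divergence between the two designs that the abstract reports.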